Math::MatrixReal(3)    User Contributed Perl Documentation    Math::MatrixReal(3)
2
3
4
NAME
       Math::MatrixReal - Matrix of Reals
7
8 Implements the data type "matrix of reals" (and consequently also "vec‐
9 tor of reals").
10
DESCRIPTION
       Implements the data type "matrix of reals", which can be used almost
13 like any other basic Perl type thanks to OPERATOR OVERLOADING, i.e.,
14
15 $product = $matrix1 * $matrix2;
16
17 does what you would like it to do (a matrix multiplication).
18
19 Also features many important operations and methods: matrix norm,
20 matrix transposition, matrix inverse, determinant of a matrix, order
21 and numerical condition of a matrix, scalar product of vectors, vector
22 product of vectors, vector length, projection of row and column vec‐
23 tors, a comfortable way for reading in a matrix from a file, the key‐
24 board or your code, and many more.
25
       Allows you to solve linear equation systems using an efficient
       algorithm known as "LR decomposition" and several approximative
       (iterative) methods.
29
30 Features an implementation of Kleene's algorithm to compute the minimal
31 costs for all paths in a graph with weighted edges (the "weights" being
32 the costs associated with each edge).
33
FUNCTIONS
   Constructor Methods And Such
36
37 "use Math::MatrixReal;"
38
39 Makes the methods and overloaded operators of this module available
40 to your program.
41
42 · "$new_matrix = new Math::MatrixReal($rows,$columns);"
43
44 The matrix object constructor method. A new matrix of size $rows by
45 $columns will be created, with the value 0.0 for all elements.
46
47 Note that this method is implicitly called by many of the other
48 methods in this module.
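
       For example, a minimal sketch (the size 3 by 4 is an arbitrary
       choice):

           use Math::MatrixReal;

           my $matrix = Math::MatrixReal->new(3,4);   # 3 rows, 4 columns, all elements 0.0
           my ($rows,$columns) = $matrix->dim();      # yields (3,4)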
49
       · "$new_matrix = $some_matrix->new($rows,$columns);"
51
52 Another way of calling the matrix object constructor method.
53
54 Matrix "$some_matrix" is not changed by this in any way.
55
       · "$new_matrix = $matrix->new_from_cols( [ $column_vector|$array_ref|$string, ... ] )"
58
59 Creates a new matrix given a reference to an array of any of the
60 following:
61
62 * column vectors ( n by 1 Math::MatrixReal matrices )
63 * references to arrays
64 * strings properly formatted to create a column with
65 Math::MatrixReal's "new_from_string" command
66
67 You may mix and match these as you wish. However, all must be of
68 the same dimension--no padding happens automatically. Example:
69
70 my $matrix = Math::MatrixReal->new_from_cols( [ [1,2], [3,4] ] );
71 print $matrix;
72
73 will print
74
75 [ 1.000000000000E+00 3.000000000000E+00 ]
76 [ 2.000000000000E+00 4.000000000000E+00 ]
77
       · "new_from_rows( [ $row_vector|$array_ref|$string, ... ] )"
79
80 Creates a new matrix given a reference to an array of any of the
81 following:
82
83 * row vectors ( 1 by n Math::MatrixReal matrices )
84 * references to arrays
85 * strings properly formatted to create a row with
86 Math::MatrixReal's "new_from_string" command
87
88 You may mix and match these as you wish. However, all must be of
89 the same dimension--no padding happens automatically. Example:
90
91 my $matrix = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
92 print $matrix;
93
94 will print
95
96 [ 1.000000000000E+00 2.000000000000E+00 ]
97 [ 3.000000000000E+00 4.000000000000E+00 ]
98
99 · "$new_matrix = Math::MatrixReal->new_diag( $array_ref );"
100
101 This method allows you to create a diagonal matrix by only specify‐
102 ing the diagonal elements. Example:
103
104 $matrix = Math::MatrixReal->new_diag( [ 1,2,3,4 ] );
105 print $matrix;
106
107 will print
108
109 [ 1.000000000000E+00 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
110 [ 0.000000000000E+00 2.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
111 [ 0.000000000000E+00 0.000000000000E+00 3.000000000000E+00 0.000000000000E+00 ]
112 [ 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 4.000000000000E+00 ]
113
       · "$new_matrix = Math::MatrixReal->new_from_string($string);"
115
116 This method allows you to read in a matrix from a string (for
117 instance, from the keyboard, from a file or from your code).
118
119 The syntax is simple: each row must start with ""[ "" and end with
120 "" ]\n"" (""\n"" being the newline character and "" "" a space or
121 tab) and contain one or more numbers, all separated from each other
122 by spaces or tabs.
123
124 Additional spaces or tabs can be added at will, but no comments.
125
126 Examples:
127
128 $string = "[ 1 2 3 ]\n[ 2 2 -1 ]\n[ 1 1 1 ]\n";
129 $matrix = Math::MatrixReal->new_from_string($string);
130 print "$matrix";
131
132 By the way, this prints
133
134 [ 1.000000000000E+00 2.000000000000E+00 3.000000000000E+00 ]
135 [ 2.000000000000E+00 2.000000000000E+00 -1.000000000000E+00 ]
136 [ 1.000000000000E+00 1.000000000000E+00 1.000000000000E+00 ]
137
138 But you can also do this in a much more comfortable way using the
139 shell-like "here-document" syntax:
140
141 $matrix = Math::MatrixReal->new_from_string(<<'MATRIX');
142 [ 1 0 0 0 0 0 1 ]
143 [ 0 1 0 0 0 0 0 ]
144 [ 0 0 1 0 0 0 0 ]
145 [ 0 0 0 1 0 0 0 ]
146 [ 0 0 0 0 1 0 0 ]
147 [ 0 0 0 0 0 1 0 ]
148 [ 1 0 0 0 0 0 -1 ]
149 MATRIX
150
151 You can even use variables in the matrix:
152
153 $c1 = 2 / 3;
154 $c2 = -2 / 5;
155 $c3 = 26 / 9;
156
157 $matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
158
159 [ 3 2 0 ]
160 [ 0 3 2 ]
161 [ $c1 $c2 $c3 ]
162
163 MATRIX
164
165 (Remember that you may use spaces and tabs to format the matrix to
166 your taste)
167
168 Note that this method uses exactly the same representation for a
169 matrix as the "stringify" operator "": this means that you can con‐
170 vert any matrix into a string with "$string = "$matrix";" and read
171 it back in later (for instance from a file!).
172
173 Note however that you may suffer a precision loss in this process
174 because only 13 digits are supported in the mantissa when printed!!
175
176 If the string you supply (or someone else supplies) does not obey
177 the syntax mentioned above, an exception is raised, which can be
178 caught by "eval" as follows:
179
180 print "Please enter your matrix (in one line): ";
181 $string = <STDIN>;
182 $string =~ s/\\n/\n/g;
183 eval { $matrix = Math::MatrixReal->new_from_string($string); };
184 if ($@)
185 {
186 print "$@";
187 # ...
188 # (error handling)
189 }
190 else
191 {
192 # continue...
193 }
194
195 or as follows:
196
197 eval { $matrix = Math::MatrixReal->new_from_string(<<"MATRIX"); };
198 [ 3 2 0 ]
199 [ 0 3 2 ]
200 [ $c1 $c2 $c3 ]
201 MATRIX
202 if ($@)
203 # ...
204
205 Actually, the method shown above for reading a matrix from the key‐
206 board is a little awkward, since you have to enter a lot of "\n"'s
207 for the newlines.
208
209 A better way is shown in this piece of code:
210
211 while (1)
212 {
213 print "\nPlease enter your matrix ";
214 print "(multiple lines, <ctrl-D> = done):\n";
215 eval { $new_matrix =
216 Math::MatrixReal->new_from_string(join('',<STDIN>)); };
217 if ($@)
218 {
219 $@ =~ s/\s+at\b.*?$//;
220 print "${@}Please try again.\n";
221 }
222 else { last; }
223 }
224
225 Possible error messages of the "new_from_string()" method are:
226
227 Math::MatrixReal::new_from_string(): syntax error in input string
228 Math::MatrixReal::new_from_string(): empty input string
229
230 If the input string has rows with varying numbers of columns, the
231 following warning will be printed to STDERR:
232
233 Math::MatrixReal::new_from_string(): missing elements will be set to zero!
234
235 If everything is okay, the method returns an object reference to
236 the (newly allocated) matrix containing the elements you specified.
237
238 · "$new_matrix = $some_matrix->shadow();"
239
240 Returns an object reference to a NEW but EMPTY matrix (filled with
241 zero's) of the SAME SIZE as matrix "$some_matrix".
242
243 Matrix "$some_matrix" is not changed by this in any way.
244
245 · "$matrix1->copy($matrix2);"
246
247 Copies the contents of matrix "$matrix2" to an ALREADY EXISTING
248 matrix "$matrix1" (which must have the same size as matrix
249 "$matrix2"!).
250
251 Matrix "$matrix2" is not changed by this in any way.
252
253 · "$twin_matrix = $some_matrix->clone();"
254
255 Returns an object reference to a NEW matrix of the SAME SIZE as
256 matrix "$some_matrix". The contents of matrix "$some_matrix" have
257 ALREADY BEEN COPIED to the new matrix "$twin_matrix". This is the
258 method that the operator "=" is overloaded to when you type "$a =
259 $b", when $a and $b are matrices.
260
261 Matrix "$some_matrix" is not changed by this in any way.
262
263 Matrix Row, Column and Element operations
264
265 · "$row_vector = $matrix->row($row);"
266
267 This is a projection method which returns an object reference to a
268 NEW matrix (which in fact is a (row) vector since it has only one
269 row) to which row number "$row" of matrix "$matrix" has already
270 been copied.
271
272 Matrix "$matrix" is not changed by this in any way.
273
274 · "$column_vector = $matrix->column($column);"
275
276 This is a projection method which returns an object reference to a
277 NEW matrix (which in fact is a (column) vector since it has only
278 one column) to which column number "$column" of matrix "$matrix"
279 has already been copied.
280
281 Matrix "$matrix" is not changed by this in any way.
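
       For example, a minimal sketch (the matrix is an arbitrary example):

           my $matrix = Math::MatrixReal->new_from_rows( [ [1,2,3], [4,5,6] ] );
           my $row_vector    = $matrix->row(2);      # the 1 by 3 matrix [ 4 5 6 ]
           my $column_vector = $matrix->column(3);   # the 2 by 1 matrix with elements 3 and 6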
282
283 · "$matrix->assign($row,$column,$value);"
284
285 Explicitly assigns a value "$value" to a single element of the
286 matrix "$matrix", located in row "$row" and column "$column",
287 thereby replacing the value previously stored there.
288
       · "$value = $matrix->element($row,$column);"
290
291 Returns the value of a specific element of the matrix "$matrix",
292 located in row "$row" and column "$column".
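
       A minimal sketch combining "assign()" and "element()" (the values
       are arbitrary examples):

           my $matrix = Math::MatrixReal->new(2,2);
           $matrix->assign(1,1, 3.14);            # set element (1,1)
           my $value = $matrix->element(1,1);     # returns 3.14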
293
       · "$new_matrix = $matrix->each( \&function );"
295
296 Creates a new matrix by evaluating a code reference on each element
297 of the given matrix. The function is passed the element, the row
298 index and the column index, in that order. The value the function
299 returns ( or the value of the last executed statement ) is the
300 value given to the corresponding element in $new_matrix.
301
302 Example:
303
304 # add 1 to every element in the matrix
305 $matrix = $matrix->each ( sub { (shift) + 1 } );
306
307 Example:
308
309 my $cofactor = $matrix->each( sub { my(undef,$i,$j) = @_;
310 ($i+$j) % 2 == 0 ? $matrix->minor($i,$j)->det()
311 : -1*$matrix->minor($i,$j)->det();
312 } );
313
314 This code needs some explanation. For each element of $matrix, it
315 throws away the actual value and stores the row and column indexes
316 in $i and $j. Then it sets element [$i,$j] in $cofactor to the
317 determinant of "$matrix->minor($i,$j)" if it is an "even" element,
       or "-1*$matrix->minor($i,$j)->det()" if it is an "odd" element.
319
       · "$new_matrix = $matrix->each_diag( \&function );"
321
322 Creates a new matrix by evaluating a code reference on each diago‐
323 nal element of the given matrix. The function is passed the ele‐
324 ment, the row index and the column index, in that order. The value
325 the function returns ( or the value of the last executed statement
326 ) is the value given to the corresponding element in $new_matrix.
327
328 · "$matrix->swap_col( $col1, $col2 );"
329
330 This method takes two one-based column numbers and swaps the values
331 of each element in each column. "$matrix->swap_col(2,3)" would
332 replace column 2 in $matrix with column 3, and replace column 3
333 with column 2.
334
335 · "$matrix->swap_row( $row1, $row2 );"
336
337 This method takes two one-based row numbers and swaps the values of
338 each element in each row. "$matrix->swap_row(2,3)" would replace
339 row 2 in $matrix with row 3, and replace row 3 with row 2.
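
       A minimal sketch of both swap methods (the matrix is an arbitrary
       example):

           my $matrix = Math::MatrixReal->new_from_rows( [ [1,2,3], [4,5,6], [7,8,9] ] );
           $matrix->swap_row(1,3);    # rows 1 and 3 change places
           $matrix->swap_col(1,2);    # columns 1 and 2 change places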
340
341 Matrix Operations
342
343 · "$det = $matrix->det();"
344
345 Returns the determinant of the matrix, without going through the
       rigamarole of computing an LR decomposition. This method should be
347 much faster than LR decomposition if the matrix is diagonal or tri‐
348 angular. Otherwise, it is just a wrapper for "$matrix->decom‐
349 pose_LR->det_LR". If the determinant is zero, there is no inverse
350 and vice-versa. Only quadratic matrices have determinants.
351
352 · "$inverse = $matrix->inverse();"
353
354 Returns the inverse of a matrix, without going through the rigama‐
       role of computing an LR decomposition. If no inverse exists, undef
356 is returned and an error is printed via "carp()". This is nothing
357 but a wrapper for "$matrix->decompose_LR->invert_LR".
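
       A minimal sketch of both methods (the matrix is an arbitrary
       invertible example):

           my $matrix  = Math::MatrixReal->new_from_rows( [ [2,1], [1,2] ] );
           my $det     = $matrix->det();        # 3
           my $inverse = $matrix->inverse();    # defined, since the determinant is not zero
           print $matrix * $inverse;            # prints (approximately) the identity matrix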
358
359 · "($rows,$columns) = $matrix->dim();"
360
361 Returns a list of two items, representing the number of rows and
362 columns the given matrix "$matrix" contains.
363
364 · "$norm_one = $matrix->norm_one();"
365
366 Returns the "one"-norm of the given matrix "$matrix".
367
368 The "one"-norm is defined as follows:
369
370 For each column, the sum of the absolute values of the elements in
371 the different rows of that column is calculated. Finally, the maxi‐
372 mum of these sums is returned.
373
374 Note that the "one"-norm and the "maximum"-norm are mathematically
375 equivalent, although for the same matrix they usually yield a dif‐
376 ferent value.
377
378 Therefore, you should only compare values that have been calculated
379 using the same norm!
380
381 Throughout this package, the "one"-norm is (arbitrarily) used for
382 all comparisons, for the sake of uniformity and comparability,
383 except for the iterative methods "solve_GSM()", "solve_SSM()" and
384 "solve_RM()" which use either norm depending on the matrix itself.
385
386 · "$norm_max = $matrix->norm_max();"
387
388 Returns the "maximum"-norm of the given matrix $matrix.
389
390 The "maximum"-norm is defined as follows:
391
392 For each row, the sum of the absolute values of the elements in the
393 different columns of that row is calculated. Finally, the maximum
394 of these sums is returned.
395
396 Note that the "maximum"-norm and the "one"-norm are mathematically
397 equivalent, although for the same matrix they usually yield a dif‐
398 ferent value.
399
400 Therefore, you should only compare values that have been calculated
401 using the same norm!
402
403 Throughout this package, the "one"-norm is (arbitrarily) used for
404 all comparisons, for the sake of uniformity and comparability,
405 except for the iterative methods "solve_GSM()", "solve_SSM()" and
406 "solve_RM()" which use either norm depending on the matrix itself.
407
408 · "$norm_sum = $matrix->norm_sum();"
409
410 This is a very simple norm which is defined as the sum of the abso‐
411 lute values of every element.
412
       · "$p_norm = $matrix->norm_p($n);"
414
415 This function returns the "p-norm" of a vector. The argument $n
416 must be a number greater than or equal to 1 or the string "Inf".
       The p-norm is defined as (sum(|x_i|^p))^(1/p). In words, it raises
       the absolute value of each element to the p-th power, adds them up,
       and then takes the p-th root of that number. If the string "Inf" is
       passed, the "infinity"-norm is computed, which is the limit of the
       p-norm as p goes to infinity; it is defined as the maximum of the
       absolute values of the elements of the vector. Also, note that the
       familiar Euclidean distance between two vectors is just the p-norm
       with p equal to 2, applied to the difference of the vectors.
425
426 Example:
427 $a = Math::MatrixReal->new_from_cols([[1,2,3]]);
428 $p1 = $a->norm_p(1);
429 $p2 = $a->norm_p(2);
430 $p3 = $a->norm_p(3);
431 $pinf = $a->norm_p("Inf");
432
433 print "(1,2,3,Inf) norm:\n$p1\n$p2\n$p3\n$pinf\n";
434
435 $i1 = $a->new_from_rows([[1,0]]);
436 $i2 = $a->new_from_rows([[0,1]]);
437
438 # this should be sqrt(2) since it is the same as the
439 # hypotenuse of a 1 by 1 right triangle
440
441 $dist = ($i1-$i2)->norm_p(2);
442 print "Distance is $dist, which should be " . sqrt(2) . "\n";
443
444 Output:
445
446 (1,2,3,Inf) norm:
447 6
448 3.74165738677394139
449 3.30192724889462668
450 3
451
452 Distance is 1.41421356237309505, which should be 1.41421356237309505
453
       · "$frob_norm = $matrix->norm_frobenius();"
455
       This norm is similar to the vector p-norm with p equal to 2, except
       that it acts on a matrix rather than a vector: each element of the
       matrix is squared, the results are summed, and the square root of
       that sum is returned.
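
       For example, a minimal sketch (the matrix is an arbitrary example):

           my $matrix = Math::MatrixReal->new_from_rows( [ [3,0], [0,4] ] );
           my $frob   = $matrix->norm_frobenius();   # sqrt(3*3 + 4*4) = 5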
459
460 · "$matrix->spectral_radius();"
461
462 Returns the maximum value of the absolute value of all eigenvalues.
463 Currently this computes all eigenvalues, then sifts through them to
464 find the largest in absolute value. Needless to say, this is very
465 inefficient, and in the future an algorithm that computes only the
466 largest eigenvalue may be implemented.
467
468 · "$matrix1->transpose($matrix2);"
469
470 Calculates the transposed matrix of matrix $matrix2 and stores the
471 result in matrix "$matrix1" (which must already exist and have the
472 same size as matrix "$matrix2"!).
473
474 This operation can also be carried out "in-place", i.e., input and
475 output matrix may be identical.
476
477 Transposition is a symmetry operation: imagine you rotate the
478 matrix along the axis of its main diagonal (going through elements
479 (1,1), (2,2), (3,3) and so on) by 180 degrees.
480
481 Another way of looking at it is to say that rows and columns are
482 swapped. In fact the contents of element "(i,j)" are swapped with
483 those of element "(j,i)".
484
485 Note that (especially for vectors) it makes a big difference if you
486 have a row vector, like this:
487
488 [ -1 0 1 ]
489
490 or a column vector, like this:
491
492 [ -1 ]
493 [ 0 ]
494 [ 1 ]
495
       the one vector being the transpose of the other!
497
498 This is especially true for the matrix product of two vectors:
499
                                      [ -1 ]
              [ -1  0  1 ]     *      [  0 ]   =   [ 2 ] ,      whereas
                                      [  1 ]

              [ -1 ]                               [  1  0 -1 ]
              [  0 ]   *   [ -1  0  1 ]     =      [  0  0  0 ]
              [  1 ]                               [ -1  0  1 ]
509
510 So be careful about what you really mean!
511
512 Hint: throughout this module, whenever a vector is explicitly
513 required for input, a COLUMN vector is expected!
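
       A minimal sketch of both the method form and the overloaded "~"
       operator (the matrix is an arbitrary example):

           my $matrix     = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );

           my $transposed = $matrix->shadow();    # same size, all zero
           $transposed->transpose($matrix);       # method form

           my $also_transposed = ~$matrix;        # overloaded "~" operator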
514
515 · "$trace = $matrix->trace();"
516
517 This returns the trace of the matrix, which is defined as the sum
518 of the diagonal elements. The matrix must be quadratic.
519
520 · "$minor = $matrix->minor($row,$col);"
521
522 Returns the minor matrix corresponding to $row and $col. $matrix
523 must be quadratic. If $matrix is n rows by n cols, the minor of
524 $row and $col will be an (n-1) by (n-1) matrix. The minor is
525 defined as crossing out the row and the col specified and returning
526 the remaining rows and columns as a matrix. This method is used by
527 "cofactor()".
528
529 · "$cofactor = $matrix->cofactor();"
530
531 The cofactor matrix is constructed as follows:
532
533 For each element, cross out the row and column that it sits in.
534 Now, take the determinant of the matrix that is left in the other
535 rows and columns. Multiply the determinant by (-1)^(i+j), where i
536 is the row index, and j is the column index. Replace the given
537 element with this value.
538
539 The cofactor matrix can be used to find the inverse of the matrix.
540 One formula for the inverse of a matrix is the cofactor matrix
541 transposed divided by the original determinant of the matrix.
542
543 The following two inverses should be exactly the same:
544
545 my $inverse1 = $matrix->inverse;
546 my $inverse2 = ~($matrix->cofactor)->each( sub { (shift)/$matrix->det() } );
547
       Caveat: Although the cofactor matrix is a simple algorithm to compute
549 the inverse of a matrix, and can be used with pencil and paper for
550 small matrices, it is comically slower than the native "inverse()"
551 function. Here is a small benchmark:
552
553 # $matrix1 is 15x15
554 $det = $matrix1->det;
555 timethese( 10,
556 {'inverse' => sub { $matrix1->inverse(); },
557 'cofactor' => sub { (~$matrix1->cofactor)->each ( sub { (shift)/$det; } ) }
558 } );
559
560 Benchmark: timing 10 iterations of LR, cofactor, inverse...
561 inverse: 1 wallclock secs ( 0.56 usr + 0.00 sys = 0.56 CPU) @ 17.86/s (n=10)
562 cofactor: 36 wallclock secs (36.62 usr + 0.01 sys = 36.63 CPU) @ 0.27/s (n=10)
563
564 · "$adjoint = $matrix->adjoint();"
565
566 The adjoint is just the transpose of the cofactor matrix. This
       method is just an alias for "~($matrix->cofactor)".
568
569 Arithmetic Operations
570
571 · "$matrix1->add($matrix2,$matrix3);"
572
573 Calculates the sum of matrix "$matrix2" and matrix "$matrix3" and
574 stores the result in matrix "$matrix1" (which must already exist
575 and have the same size as matrix "$matrix2" and matrix
576 "$matrix3"!).
577
578 This operation can also be carried out "in-place", i.e., the output
579 and one (or both) of the input matrices may be identical.
580
581 · "$matrix1->subtract($matrix2,$matrix3);"
582
583 Calculates the difference of matrix "$matrix2" minus matrix
584 "$matrix3" and stores the result in matrix "$matrix1" (which must
585 already exist and have the same size as matrix "$matrix2" and
586 matrix "$matrix3"!).
587
588 This operation can also be carried out "in-place", i.e., the output
589 and one (or both) of the input matrices may be identical.
590
591 Note that this operation is the same as
592 "$matrix1->add($matrix2,-$matrix3);", although the latter is a lit‐
593 tle less efficient.
594
595 · "$matrix1->multiply_scalar($matrix2,$scalar);"
596
597 Calculates the product of matrix "$matrix2" and the number
598 "$scalar" (i.e., multiplies each element of matrix "$matrix2" with
599 the factor "$scalar") and stores the result in matrix "$matrix1"
600 (which must already exist and have the same size as matrix
601 "$matrix2"!).
602
603 This operation can also be carried out "in-place", i.e., input and
604 output matrix may be identical.
605
606 · "$product_matrix = $matrix1->multiply($matrix2);"
607
608 Calculates the product of matrix "$matrix1" and matrix "$matrix2"
609 and returns an object reference to a new matrix "$product_matrix"
610 in which the result of this operation has been stored.
611
612 Note that the dimensions of the two matrices "$matrix1" and
613 "$matrix2" (i.e., their numbers of rows and columns) must harmonize
614 in the following way (example):
615
616 [ 2 2 ]
617 [ 2 2 ]
618 [ 2 2 ]
619
620 [ 1 1 1 ] [ * * ]
621 [ 1 1 1 ] [ * * ]
622 [ 1 1 1 ] [ * * ]
623 [ 1 1 1 ] [ * * ]
624
625 I.e., the number of columns of matrix "$matrix1" has to be the same
626 as the number of rows of matrix "$matrix2".
627
628 The number of rows and columns of the resulting matrix "$prod‐
629 uct_matrix" is determined by the number of rows of matrix
630 "$matrix1" and the number of columns of matrix "$matrix2", respec‐
631 tively.
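
       Thanks to the operator overloading mentioned above, the same
       operations can also be written with the usual arithmetic operators.
       A minimal sketch (the 2 by 2 matrices are arbitrary examples):

           my $a = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
           my $b = Math::MatrixReal->new_from_rows( [ [5,6], [7,8] ] );

           my $sum        = $a + $b;    # add()
           my $difference = $a - $b;    # subtract()
           my $product    = $a * $b;    # multiply()

           my $scaled = $a->shadow();
           $scaled->multiply_scalar($a,2);    # each element of $a times 2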
632
633 · "$matrix1->negate($matrix2);"
634
635 Calculates the negative of matrix "$matrix2" (i.e., multiplies all
636 elements with "-1") and stores the result in matrix "$matrix1"
637 (which must already exist and have the same size as matrix
638 "$matrix2"!).
639
640 This operation can also be carried out "in-place", i.e., input and
641 output matrix may be identical.
642
643 · "$matrix_to_power = $matrix1->exponent($integer);"
644
645 Raises the matrix to the $integer power. Obviously, $integer must
646 be an integer. If it is zero, the identity matrix is returned. If a
647 negative integer is given, the inverse will be computed (if it
       exists) and then raised to the absolute value of $integer. The
649 matrix must be quadratic.
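
       For example, a minimal sketch (the matrix is an arbitrary example):

           my $matrix   = Math::MatrixReal->new_from_rows( [ [1,1], [0,1] ] );
           my $cubed    = $matrix->exponent(3);    # same as $matrix ** 3
           my $identity = $matrix->exponent(0);    # the identity matrix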
650
651 Boolean Matrix Operations
652
653 · "$matrix->is_quadratic();"
654
655 Returns a boolean value indicating if the given matrix is quadratic
       (also known as "square" or "n by n"). A matrix is quadratic if it
657 has the same number of rows as it does columns.
658
659 · "$matrix->is_square();"
660
661 This is an alias for "is_quadratic()".
662
663 · "$matrix->is_symmetric();"
664
665 Returns a boolean value indicating if the given matrix is symmet‐
666 ric. By definition, a matrix is symmetric if and only if
667 (M[i,j]=M[j,i]). This is equivalent to "($matrix == ~$matrix)" but
668 without memory allocation. Only quadratic matrices can be symmet‐
669 ric.
670
671 Notes: A symmetric matrix always has real eigenvalues/eigenvectors.
672 A matrix plus its transpose is always symmetric.
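
       A quick sketch of the second note (the matrix is an arbitrary
       example):

           my $matrix    = Math::MatrixReal->new_from_rows( [ [1,7], [0,2] ] );
           my $symmetric = $matrix + ~$matrix;
           print "symmetric\n" if $symmetric->is_symmetric();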
673
674 · "$matrix->is_skew_symmetric();"
675
676 Returns a boolean value indicating if the given matrix is skew sym‐
       metric. By definition, a matrix is skew symmetric if and only if
678 (M[i,j]=-M[j,i]). This is equivalent to "($matrix == -(~$matrix))"
679 but without memory allocation. Only quadratic matrices can be skew
680 symmetric.
681
682 · "$matrix->is_diagonal();"
683
684 Returns a boolean value indicating if the given matrix is diagonal,
685 i.e. all of the nonzero elements are on the main diagonal. Only
686 quadratic matrices can be diagonal.
687
688 · "$matrix->is_tridiagonal();"
689
690 Returns a boolean value indicating if the given matrix is tridiago‐
691 nal, i.e. all of the nonzero elements are on the main diagonal or
692 the diagonals above and below the main diagonal. Only quadratic
693 matrices can be tridiagonal.
694
695 · "$matrix->is_upper_triangular();"
696
697 Returns a boolean value indicating if the given matrix is upper
698 triangular, i.e. all of the nonzero elements not on the main diago‐
699 nal are above it. Only quadratic matrices can be upper triangular.
700 Note: diagonal matrices are both upper and lower triangular.
701
702 · "$matrix->is_lower_triangular();"
703
704 Returns a boolean value indicating if the given matrix is lower
705 triangular, i.e. all of the nonzero elements not on the main diago‐
706 nal are below it. Only quadratic matrices can be lower triangular.
707 Note: diagonal matrices are both upper and lower triangular.
708
709 · "$matrix->is_orthogonal();"
710
711 Returns a boolean value indicating if the given matrix is orthogo‐
       nal. An orthogonal matrix has the property that its transpose
       equals the inverse of the matrix. Instead of computing each and
       comparing them, this method multiplies the matrix by its trans‐
       pose, and returns true if this turns out to be the identity matrix,
       false otherwise. Only quadratic matrices can be orthogonal.
717
718 · "$matrix->is_binary();"
719
720 Returns a boolean value indicating if the given matrix is binary.
721 A matrix is binary if it contains only zeroes or ones.
722
723 · "$matrix->is_gramian();"
724
       Returns a boolean value indicating if the given matrix is Gramian.
726 A matrix $A is Gramian if and only if there exists a square matrix
727 $B such that "$A = ~$B*$B". This is equivalent to checking if $A is
728 symmetric and has all nonnegative eigenvalues, which is what
729 Math::MatrixReal uses to check for this property.
730
731 · "$matrix->is_LR();"
732
733 Returns a boolean value indicating if the matrix is an LR decompo‐
734 sition matrix.
735
736 · "$matrix->is_positive();"
737
738 Returns a boolean value indicating if the matrix contains only pos‐
739 itive entries. Note that a zero entry is not positive and will
740 cause "is_positive()" to return false.
741
742 · "$matrix->is_negative();"
743
744 Returns a boolean value indicating if the matrix contains only neg‐
745 ative entries. Note that a zero entry is not negative and will
746 cause "is_negative()" to return false.
747
748 · "$matrix->is_periodic($k);"
749
750 Returns a boolean value indicating if the matrix is periodic with
751 period $k. This is true if "$matrix ** ($k+1) == $matrix". When
752 "$k == 1", this reduces down to the "is_idempotent()" function.
753
754 · "$matrix->is_idempotent();"
755
756 Returns a boolean value indicating if the matrix is idempotent,
757 which is defined as the square of the matrix being equal to the
758 original matrix, i.e "$matrix ** 2 == $matrix".
759
760 · "$matrix->is_row_vector();"
761
762 Returns a boolean value indicating if the matrix is a row vector.
763 A row vector is a matrix which is 1xn. Note that the 1x1 matrix is
764 both a row and column vector.
765
766 · "$matrix->is_col_vector();"
767
768 Returns a boolean value indicating if the matrix is a col vector.
769 A col vector is a matrix which is nx1. Note that the 1x1 matrix is
770 both a row and column vector.
771
772 Eigensystems
773
774 · "($l, $V) = $matrix->sym_diagonalize();"
775
          This method performs the diagonalization of the quadratic
          symmetric matrix M stored in $matrix. On output, l is a column
          vector containing all the eigenvalues of M and V is an
          orthogonal matrix whose columns are the corresponding
          normalized eigenvectors. The primary property of an eigenvalue
          l and an eigenvector x is of course that: M * x = l * x.

          The method uses a Householder reduction to tridiagonal form
          followed by a QL algorithm with implicit shifts on this
          tridiagonal. (The tridiagonal matrix is kept internally in a
          compact form in this routine to save memory.) In fact, this
          routine wraps the householder() and tri_diagonalize() methods
          described below when their intermediate results are not
          desired. The overall algorithmic complexity of this technique
          is O(N^3). According to several books, the coefficient hidden
          by the 'O' is one of the best possible for general (symmetric)
          matrices.
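
          A minimal sketch that also checks the defining property
          "M * x = l * x" for the first eigenpair (the 2 by 2 symmetric
          matrix is an arbitrary example):

              my $M = Math::MatrixReal->new_from_rows( [ [ 2, 1 ],
                                                         [ 1, 2 ] ] );
              my ($l, $V) = $M->sym_diagonalize();

              my $lambda1 = $l->element(1,1);    # first eigenvalue
              my $x1      = $V->column(1);       # corresponding eigenvector
              my $scaled  = $x1->shadow();
              $scaled->multiply_scalar($x1,$lambda1);
              print abs( $M * $x1 - $scaled ), "\n";   # prints (nearly) zero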
792
793 · "($T, $Q) = $matrix->householder();"
794
795 This method performs the Householder algorithm which reduces the
796 n by n real symmetric matrix M contained in $matrix to tridiago‐
797 nal form. On output, T is a symmetric tridiagonal matrix (only
798 diagonal and off-diagonal elements are non-zero) and Q is an
          orthogonal matrix performing the transformation between M and T
800 ("$M == $Q * $T * ~$Q").
801
802 · "($l, $V) = $T->tri_diagonalize([$Q]);"
803
804 This method diagonalizes the symmetric tridiagonal matrix T. On
805 output, $l and $V are similar to the output values described for
806 sym_diagonalize().
807
808 The optional argument $Q corresponds to an orthogonal transforma‐
809 tion matrix Q that should be used additionally during V (eigen‐
810 vectors) computation. It should be supplied if the desired eigen‐
811 vectors correspond to a more general symmetric matrix M previ‐
812 ously reduced by the householder() method, not a mere tridiago‐
813 nal. If T is really a tridiagonal matrix, Q can be omitted (it
814 will be internally created in fact as an identity matrix). The
815 method uses a QL algorithm (with implicit shifts).
816
817 · "$l = $matrix->sym_eigenvalues();"
818
819 This method computes the eigenvalues of the quadratic symmetric
820 matrix M stored in $matrix. On output, l is a column vector con‐
821 taining all the eigenvalues of M. Eigenvectors are not computed
          (in contrast to "sym_diagonalize()") and this method is more
823 efficient (even though it uses a similar algorithm with two
824 phases). However, understand that the algorithmic complexity of
825 this technique is still also O(N^3). But the coefficient hidden
826 by the 'O' is better by a factor of..., well, see your benchmark,
827 it's wiser.
828
829 This routine wraps the householder_tridiagonal() and tri_eigen‐
830 values() methods described below when the intermediate tridiago‐
831 nal matrix is not needed.
832
833 · "$T = $matrix->householder_tridiagonal();"
834
835 This method performs the Householder algorithm which reduces the
836 n by n real symmetric matrix M contained in $matrix to tridiago‐
837 nal form. On output, T is the obtained symmetric tridiagonal
838 matrix (only diagonal and off-diagonal elements are non-zero).
839 The operation is similar to the householder() method, but poten‐
840 tially a little more efficient as the transformation matrix is
841 not computed.
842
843 · "$l = $T->tri_eigenvalues();"
844
          This method computes the eigenvalues of the symmetric tridiagonal
846 matrix T. On output, $l is a vector containing the eigenvalues
847 (similar to "sym_eigenvalues()"). This method is much more effi‐
848 cient than tri_diagonalize() when eigenvectors are not needed.
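
          As a sketch, the two-phase pipeline spelled out explicitly (the
          symmetric matrix is an arbitrary example); it should give the
          same eigenvalues as "sym_eigenvalues()":

              my $M = Math::MatrixReal->new_from_rows( [ [ 2, 1, 0 ],
                                                         [ 1, 2, 1 ],
                                                         [ 0, 1, 2 ] ] );
              my $T = $M->householder_tridiagonal();   # phase 1: reduce to tridiagonal
              my $l = $T->tri_eigenvalues();           # phase 2: eigenvalues of T (= of M)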
849
850 Miscellaneous
851
852 · "$matrix->zero();"
853
854 Assigns a zero to every element of the matrix "$matrix", i.e.,
855 erases all values previously stored there, thereby effectively
856 transforming the matrix into a "zero"-matrix or "null"-matrix, the
857 neutral element of the addition operation in a Ring.
858
859 (For instance the (quadratic) matrices with "n" rows and columns
       and matrix addition and multiplication form a Ring. A prominent
       characteristic of this Ring is that multiplication is not commutative,
862 i.e., in general, ""matrix1 * matrix2"" is not the same as
863 ""matrix2 * matrix1""!)
864
865 · "$matrix->one();"
866
867 Assigns one's to the elements on the main diagonal (elements (1,1),
868 (2,2), (3,3) and so on) of matrix "$matrix" and zero's to all oth‐
869 ers, thereby erasing all values previously stored there and trans‐
870 forming the matrix into a "one"-matrix, the neutral element of the
871 multiplication operation in a Ring.
872
873 (If the matrix is quadratic (which this method doesn't require,
874 though), then multiplying this matrix with itself yields this same
875 matrix again, and multiplying it with some other matrix leaves that
876 other matrix unchanged!)
877
878 · "$latex_string = $matrix->as_latex( align=> "c", format => "%s",
879 name => "" );"
880
881 This function returns the matrix as a LaTeX string. It takes a hash
882 as an argument which is used to control the style of the output.
883 The hash element "align" may be "c","l" or "r", corresponding to
884 center, left and right, respectively. The "format" element is a
885 format string that is given to "sprintf" to control the style of
       number format, such as floating point or scientific notation. The
887 "name" element can be used so that a LaTeX string of "$name = " is
888 prepended to the string.
889
890 Example:
891
892 my $a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
893 print $a->as_latex( ( format => "%.2f", align => "l",name => "A" ) );
894
895 Output:
896 $A = $ $
897 \left( \begin{array}{ll}
898 1.23&1.00 \\
899 5.68&2.00 \\
900 9.10&3.00
901 \end{array} \right)
902 $
903
904 · "$yacas_string = $matrix->as_yacas( format => "%s", name => "",
905 semi => 0 );"
906
907 This function returns the matrix as a string that can be read by
       Yacas. It takes a hash as an argument which controls the style
       of the output. The "format" element is a format string that is
       given to "sprintf" to control the style of number format, such as
       floating point or scientific notation. The "name" element can be
       used so that "$name = " is prepended to the string. The "semi"
       element can be set to 1 so that a semicolon is appended (so Yacas
       does not print out the matrix).
915
916 Example:
917
918 $a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
919 print $a->as_yacas( ( format => "%.2f", align => "l",name => "A" ) );
920
921 Output:
922
923 A := {{1.23,1.00},{5.68,2.00},{9.10,3.00}}
924
925 · "$matlab_string = $matrix->as_matlab( format => "%s", name => "",
926 semi => 0 );"
927
928 This function returns the matrix as a string that can be read by
       Matlab. It takes a hash as an argument which controls the style
       of the output. The "format" element is a format string that is
       given to "sprintf" to control the style of number format, such as
       floating point or scientific notation. The "name" element can be
       used so that "$name = " is prepended to the string. The "semi"
       element can be set to 1 so that a semicolon is appended (so Matlab
       does not print out the matrix).
936
937 Example:
938
939 my $a = Math::MatrixReal->new_from_rows([[ 1.234, 5.678, 9.1011],[1,2,3]] );
940 print $a->as_matlab( ( format => "%.3f", name => "A",semi => 1 ) );
941
942 Output:
943 A = [ 1.234 5.678 9.101;
944 1.000 2.000 3.000];
945
946 · "$scilab_string = $matrix->as_scilab( format => "%s", name => "",
947 semi => 0 );"
948
949 This function is just an alias for "as_matlab()", since both Scilab
950 and Matlab have the same matrix format.
951
952 · "$minimum = Math::MatrixReal::min($number1,$number2);"
953
954 Returns the minimum of the two numbers ""number1"" and ""number2"".
955
956 · "$maximum = Math::MatrixReal::max($number1,$number2);"
957
958 Returns the maximum of the two numbers ""number1"" and ""number2"".
959
960 · "$minimal_cost_matrix = $cost_matrix->kleene();"
961
962 Copies the matrix "$cost_matrix" (which has to be quadratic!) to a
963 new matrix of the same size (i.e., "clones" the input matrix) and
964 applies Kleene's algorithm to it.
965
966 See Math::Kleene(3) for more details about this algorithm!
967
968 The method returns an object reference to the new matrix.
969
970 Matrix "$cost_matrix" is not changed by this method in any way.
971
972 · "($norm_matrix,$norm_vector) = $matrix->normalize($vector);"
973
974 This method is used to improve the numerical stability when solving
975 linear equation systems.
976
977 Suppose you have a matrix "A" and a vector "b" and you want to find
978 out a vector "x" so that "A * x = b", i.e., the vector "x" which
979 solves the equation system represented by the matrix "A" and the
980 vector "b".
981
982 Applying this method to the pair (A,b) yields a pair (A',b') where
983 each row has been divided by (the absolute value of) the greatest
984 coefficient appearing in that row. So this coefficient becomes
985 equal to "1" (or "-1") in the new pair (A',b') (all others become
986 smaller than one and greater than minus one).
987
988 Note that this operation does not change the equation system itself
989 because the same division is carried out on either side of the
990 equation sign!
991
992 The method requires a quadratic (!) matrix "$matrix" and a vector
993 "$vector" for input (the vector must be a column vector with the
994 same number of rows as the input matrix) and returns a list of two
995 items which are object references to a new matrix and a new vector,
996 in this order.
997
998 The output matrix and vector are clones of the input matrix and
999 vector to which the operation explained above has been applied.
1000
1001 The input matrix and vector are not changed by this in any way.
1002
1003 Example of how this method can affect the result of the methods to
1004 solve equation systems (explained immediately below following this
1005 method):
1006
1007 Consider the following little program:
1008
1009 #!perl -w
1010
1011 use Math::MatrixReal qw(new_from_string);
1012
1013 $A = Math::MatrixReal->new_from_string(<<"MATRIX");
1014 [ 1 2 3 ]
1015 [ 5 7 11 ]
1016 [ 23 19 13 ]
1017 MATRIX
1018
1019 $b = Math::MatrixReal->new_from_string(<<"MATRIX");
1020 [ 0 ]
1021 [ 1 ]
1022 [ 29 ]
1023 MATRIX
1024
1025 $LR = $A->decompose_LR();
1026 if (($dim,$x,$B) = $LR->solve_LR($b))
1027 {
1028 $test = $A * $x;
1029 print "x = \n$x";
1030 print "A * x = \n$test";
1031 }
1032
1033 ($A_,$b_) = $A->normalize($b);
1034
1035 $LR = $A_->decompose_LR();
1036 if (($dim,$x,$B) = $LR->solve_LR($b_))
1037 {
1038 $test = $A * $x;
1039 print "x = \n$x";
1040 print "A * x = \n$test";
1041 }
1042
1043 This will print:
1044
1045 x =
1046 [ 1.000000000000E+00 ]
1047 [ 1.000000000000E+00 ]
1048 [ -1.000000000000E+00 ]
1049 A * x =
1050 [ 4.440892098501E-16 ]
1051 [ 1.000000000000E+00 ]
1052 [ 2.900000000000E+01 ]
1053 x =
1054 [ 1.000000000000E+00 ]
1055 [ 1.000000000000E+00 ]
1056 [ -1.000000000000E+00 ]
1057 A * x =
1058 [ 0.000000000000E+00 ]
1059 [ 1.000000000000E+00 ]
1060 [ 2.900000000000E+01 ]
1061
1062 You can see that in the second example (where "normalize()" has
1063 been used), the result is "better", i.e., more accurate!
1064
1065 · "$LR_matrix = $matrix->decompose_LR();"
1066
1067 This method is needed to solve linear equation systems.
1068
1069 Suppose you have a matrix "A" and a vector "b" and you want to find
1070 out a vector "x" so that "A * x = b", i.e., the vector "x" which
1071 solves the equation system represented by the matrix "A" and the
1072 vector "b".
1073
1074 You might also have a matrix "A" and a whole bunch of different
1075 vectors "b1".."bk" for which you need to find vectors "x1".."xk" so
1076 that "A * xi = bi", for "i=1..k".
1077
1078 Using Gaussian transformations (multiplying a row or column with a
1079 factor, swapping two rows or two columns and adding a multiple of
1080 one row or column to another), it is possible to decompose any
1081 matrix "A" into two triangular matrices, called "L" and "R" (for
1082 "Left" and "Right").
1083
1084 "L" has one's on the main diagonal (the elements (1,1), (2,2),
       (3,3) and so on), non-zero values to the left and below of the main
1086 diagonal and all zero's in the upper right half of the matrix.
1087
1088 "R" has non-zero values on the main diagonal as well as to the
1089 right and above of the main diagonal and all zero's in the lower
1090 left half of the matrix, as follows:
1091
1092 [ 1 0 0 0 0 ] [ x x x x x ]
1093 [ x 1 0 0 0 ] [ 0 x x x x ]
1094 L = [ x x 1 0 0 ] R = [ 0 0 x x x ]
1095 [ x x x 1 0 ] [ 0 0 0 x x ]
1096 [ x x x x 1 ] [ 0 0 0 0 x ]
1097
1098 Note that ""L * R"" is equivalent to matrix "A" in the sense that
1099 "L * R * x = b <==> A * x = b" for all vectors "x", leaving out
1100 of account permutations of the rows and columns (these are taken
1101 care of "magically" by this module!) and numerical errors.
1102
1103 Trick:
1104
1105 Because we know that "L" has one's on its main diagonal, we can
1106 store both matrices together in the same array without information
1107 loss! I.e.,
1108
1109 [ R R R R R ]
1110 [ L R R R R ]
1111 LR = [ L L R R R ]
1112 [ L L L R R ]
1113 [ L L L L R ]
1114
1115 Beware, though, that "LR" and ""L * R"" are not the same!!!
1116
1117 Note also that for the same reason, you cannot apply the method
1118 "normalize()" to an "LR" decomposition matrix. Trying to do so will
1119 yield meaningless rubbish!
1120
1121 (You need to apply "normalize()" to each pair (Ai,bi) BEFORE decom‐
1122 posing the matrix "Ai'"!)
1123
1124 Now what does all this help us in solving linear equation systems?
1125
1126 It helps us because a triangular matrix is the next best thing that
1127 can happen to us besides a diagonal matrix (a matrix that has non-
1128 zero values only on its main diagonal - in which case the solution
1129 is trivial, simply divide ""b[i]"" by ""A[i,i]"" to get ""x[i]""!).
1130
1131 To find the solution to our problem ""A * x = b"", we divide this
1132 problem in parts: instead of solving "A * x = b" directly, we first
1133 decompose "A" into "L" and "R" and then solve ""L * y = b"" and
1134 finally ""R * x = y"" (motto: divide and rule!).
1135
1136 From the illustration above it is clear that solving ""L * y = b""
1137 and ""R * x = y"" is straightforward: we immediately know that
1138 "y[1] = b[1]". We then deduce swiftly that
1139
1140 y[2] = b[2] - L[2,1] * y[1]
1141
1142 (and we know ""y[1]"" by now!), that
1143
1144 y[3] = b[3] - L[3,1] * y[1] - L[3,2] * y[2]
1145
1146 and so on.
1147
1148 Having effortlessly calculated the vector "y", we now proceed to
1149 calculate the vector "x" in a similar fashion: we see immediately
1150 that "x[n] = y[n] / R[n,n]". It follows that
1151
1152 x[n-1] = ( y[n-1] - R[n-1,n] * x[n] ) / R[n-1,n-1]
1153
1154 and
1155
1156 x[n-2] = ( y[n-2] - R[n-2,n-1] * x[n-1] - R[n-2,n] * x[n] )
1157 / R[n-2,n-2]
1158
1159 and so on.
1160
1161 You can see that - especially when you have many vectors "b1".."bk"
1162 for which you are searching solutions to "A * xi = bi" - this
1163 scheme is much more efficient than a straightforward, "brute force"
1164 approach.
1165
1166 This method requires a quadratic matrix as its input matrix.
1167
1168 If you don't have that many equations, fill up with zero's (i.e.,
1169 do nothing to fill the superfluous rows if it's a "fresh" matrix,
1170 i.e., a matrix that has been created with "new()" or "shadow()").
1171
1172 The method returns an object reference to a new matrix containing
1173 the matrices "L" and "R".
1174
1175 The input matrix is not changed by this method in any way.
1176
1177 Note that you can "copy()" or "clone()" the result of this method
1178 without losing its "magical" properties (for instance concerning
1179 the hidden permutations of its rows and columns).
1180
1181 However, as soon as you are applying any method that alters the
1182 contents of the matrix, its "magical" properties are stripped off,
1183 and the matrix immediately reverts to an "ordinary" matrix (with
1184 the values it just happens to contain at that moment, be they mean‐
1185 ingful as an ordinary matrix or not!).
1186
       · "($dimension,$x_vector,$base_matrix) = $LR_matrix->solve_LR($b_vector);"
1189
1190 Use this method to actually solve an equation system.
1191
1192 Matrix "$LR_matrix" must be a (quadratic) matrix returned by the
1193 method "decompose_LR()", the LR decomposition matrix of the matrix
1194 "A" of your equation system "A * x = b".
1195
1196 The input vector "$b_vector" is the vector "b" in your equation
1197 system "A * x = b", which must be a column vector and have the same
1198 number of rows as the input matrix "$LR_matrix".
1199
1200 The method returns a list of three items if a solution exists or an
1201 empty list otherwise (!).
1202
1203 Therefore, you should always use this method like this:
1204
1205 if ( ($dim,$x_vec,$base) = $LR->solve_LR($b_vec) )
1206 {
1207 # do something with the solution...
1208 }
1209 else
1210 {
1211 # do something with the fact that there is no solution...
1212 }
1213
1214 The three items returned are: the dimension "$dimension" of the
1215 solution space (which is zero if only one solution exists, one if
1216 the solution is a straight line, two if the solution is a plane,
1217 and so on), the solution vector "$x_vector" (which is the vector
1218 "x" of your equation system "A * x = b") and a matrix
1219 "$base_matrix" representing a base of the solution space (a set of
1220 vectors which put up the solution space like the spokes of an
1221 umbrella).
1222
1223 Only the first "$dimension" columns of this base matrix actually
1224 contain entries, the remaining columns are all zero.
1225
1226 Now what is all this stuff with that "base" good for?
1227
1228 The output vector "x" is ALWAYS a solution of your equation system
1229 "A * x = b".
1230
1231 But also any vector "$vector"
1232
1233 $vector = $x_vector->clone();
1234
1235 $machine_infinity = 1E+99; # or something like that
1236
1237 for ( $i = 1; $i <= $dimension; $i++ )
1238 {
1239 $vector += rand($machine_infinity) * $base_matrix->column($i);
1240 }
1241
1242 is a solution to your problem "A * x = b", i.e., if "$A_matrix"
1243 contains your matrix "A", then
1244
1245 print abs( $A_matrix * $vector - $b_vector ), "\n";
1246
1247 should print a number around 1E-16 or so!
1248
1249 By the way, note that you can actually calculate those vectors
1250 "$vector" a little more efficient as follows:
1251
1252 $rand_vector = $x_vector->shadow();
1253
1254 $machine_infinity = 1E+99; # or something like that
1255
1256 for ( $i = 1; $i <= $dimension; $i++ )
1257 {
1258 $rand_vector->assign($i,1, rand($machine_infinity) );
1259 }
1260
1261 $vector = $x_vector + ( $base_matrix * $rand_vector );
1262
1263 Note that the input matrix and vector are not changed by this
1264 method in any way.
1265
1266 · "$inverse_matrix = $LR_matrix->invert_LR();"
1267
1268 Use this method to calculate the inverse of a given matrix
1269 "$LR_matrix", which must be a (quadratic) matrix returned by the
1270 method "decompose_LR()".
1271
1272 The method returns an object reference to a new matrix of the same
1273 size as the input matrix containing the inverse of the matrix that
1274 you initially fed into "decompose_LR()" IF THE INVERSE EXISTS, or
1275 an empty list otherwise.
1276
1277 Therefore, you should always use this method in the following way:
1278
1279 if ( $inverse_matrix = $LR->invert_LR() )
1280 {
1281 # do something with the inverse matrix...
1282 }
1283 else
1284 {
1285 # do something with the fact that there is no inverse matrix...
1286 }
1287
1288 Note that by definition (disregarding numerical errors), the prod‐
1289 uct of the initial matrix and its inverse (or vice-versa) is always
1290 a matrix containing one's on the main diagonal (elements (1,1),
1291 (2,2), (3,3) and so on) and zero's elsewhere.
1292
1293 The input matrix is not changed by this method in any way.
1294
1295 · "$condition = $matrix->condition($inverse_matrix);"
1296
1297 In fact this method is just a shortcut for
1298
1299 abs($matrix) * abs($inverse_matrix)
1300
1301 Both input matrices must be quadratic and have the same size, and
1302 the result is meaningful only if one of them is the inverse of the
1303 other (for instance, as returned by the method "invert_LR()").
1304
1305 The number returned is a measure of the "condition" of the given
1306 matrix "$matrix", i.e., a measure of the numerical stability of the
1307 matrix.
1308
1309 This number is always positive, and the smaller its value, the bet‐
1310 ter the condition of the matrix (the better the stability of all
1311 subsequent computations carried out using this matrix).
1312
1313 Numerical stability means for example that if
1314
1315 abs( $vec_correct - $vec_with_error ) < $epsilon
1316
1317 holds, there must be a "$delta" which doesn't depend on the vector
1318 "$vec_correct" (nor "$vec_with_error", by the way) so that
1319
1320 abs( $matrix * $vec_correct - $matrix * $vec_with_error ) < $delta
1321
1322 also holds.
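
       A minimal sketch (the matrix is an arbitrary invertible example):

           my $matrix = Math::MatrixReal->new_from_rows( [ [ 4, 1 ],
                                                           [ 2, 3 ] ] );
           my $LR        = $matrix->decompose_LR();
           my $inverse   = $LR->invert_LR();
           my $condition = $matrix->condition($inverse);   # the smaller, the better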
1323
1324 · "$determinant = $LR_matrix->det_LR();"
1325
1326 Calculates the determinant of a matrix, whose LR decomposition
1327 matrix "$LR_matrix" must be given (which must be a (quadratic)
1328 matrix returned by the method "decompose_LR()").
1329
1330 In fact the determinant is a by-product of the LR decomposition: It
1331 is (in principle, that is, except for the sign) simply the product
1332 of the elements on the main diagonal (elements (1,1), (2,2), (3,3)
1333 and so on) of the LR decomposition matrix.
1334
1335 (The sign is taken care of "magically" by this module)
1336
1337 · "$order = $LR_matrix->order_LR();"
1338
1339 Calculates the order (called "Rang" in German) of a matrix, whose
1340 LR decomposition matrix "$LR_matrix" must be given (which must be a
1341 (quadratic) matrix returned by the method "decompose_LR()").
1342
       This number is a measure of the number of linearly independent row
       and column vectors (= number of linearly independent equations in the
1345 case of a matrix representing an equation system) of the matrix
1346 that was initially fed into "decompose_LR()".
1347
1348 If "n" is the number of rows and columns of the (quadratic!)
1349 matrix, then "n - order" is the dimension of the solution space of
1350 the associated equation system.
1351
1352 · "$rank = $LR_matrix->rank_LR();"
1353
1354 This is an alias for the "order_LR()" function. The "order" is usu‐
1355 ally called the "rank" in the United States.
1356
1357 · "$scalar_product = $vector1->scalar_product($vector2);"
1358
1359 Returns the scalar product of vector "$vector1" and vector "$vec‐
1360 tor2".
1361
1362 Both vectors must be column vectors (i.e., a matrix having several
1363 rows but only one column).
1364
1365 This is a (more efficient!) shortcut for
1366
1367 $temp = ~$vector1 * $vector2;
1368 $scalar_product = $temp->element(1,1);
1369
1370 or the sum "i=1..n" of the products "vector1[i] * vector2[i]".
1371
       Provided neither of the two input vectors is the null vector, the
1373 two vectors are orthogonal, i.e., have an angle of 90 degrees
1374 between them, exactly when their scalar product is zero, and
1375 vice-versa.
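
       For example, a minimal sketch (the two column vectors are arbitrary
       and happen to be orthogonal):

           my $vector1 = Math::MatrixReal->new_from_cols( [ [ 1, 0, 2 ] ] );
           my $vector2 = Math::MatrixReal->new_from_cols( [ [ 0, 5, 0 ] ] );
           my $scalar_product = $vector1->scalar_product($vector2);   # 1*0 + 0*5 + 2*0 = 0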
1376
1377 · "$vector_product = $vector1->vector_product($vector2);"
1378
1379 Returns the vector product of vector "$vector1" and vector "$vec‐
1380 tor2".
1381
1382 Both vectors must be column vectors (i.e., a matrix having several
1383 rows but only one column).
1384
1385 Currently, the vector product is only defined for 3 dimensions
1386 (i.e., vectors with 3 rows); all other vectors trigger an error
1387 message.
1388
1389 In 3 dimensions, the vector product of two vectors "x" and "y" is
1390 defined as
1391
                     | x[1]  y[1]  e[1] |
         determinant | x[2]  y[2]  e[2] |
                     | x[3]  y[3]  e[3] |
1395
1396 where the ""x[i]"" and ""y[i]"" are the components of the two vec‐
       tors "x" and "y", respectively, and the ""e[i]"" are unit vectors
1398 (i.e., vectors with a length equal to one) with a one in row "i"
1399 and zero's elsewhere (this means that you have numbers and vectors
1400 as elements in this matrix!).
1401
1402 This determinant evaluates to the rather simple formula
1403
1404 z[1] = x[2] * y[3] - x[3] * y[2]
1405 z[2] = x[3] * y[1] - x[1] * y[3]
1406 z[3] = x[1] * y[2] - x[2] * y[1]
1407
1408 A characteristic property of the vector product is that the result‐
1409 ing vector is orthogonal to both of the input vectors (if neither
1410 of both is the null vector, otherwise this is trivial), i.e., the
1411 scalar product of each of the input vectors with the resulting vec‐
1412 tor is always zero.
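
       A minimal sketch verifying that property (the two vectors are
       arbitrary examples):

           my $x = Math::MatrixReal->new_from_cols( [ [ 1, 0, 0 ] ] );
           my $y = Math::MatrixReal->new_from_cols( [ [ 0, 1, 0 ] ] );
           my $z = $x->vector_product($y);                  # the column vector [ 0 0 1 ]
           print $x->scalar_product($z), " ",
                 $y->scalar_product($z), "\n";              # prints 0 0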
1413
1414 · "$length = $vector->length();"
1415
1416 This is actually a shortcut for
1417
1418 $length = sqrt( $vector->scalar_product($vector) );
1419
1420 and returns the length of a given (column!) vector "$vector".
1421
1422 Note that the "length" calculated by this method is in fact the
1423 "two"-norm of a vector "$vector"!
1424
1425 The general definition for norms of vectors is the following:
1426
1427 sub vector_norm
1428 {
1429 croak "Usage: \$norm = \$vector->vector_norm(\$n);"
1430 if (@_ != 2);
1431
1432 my($vector,$n) = @_;
1433 my($rows,$cols) = ($vector->[1],$vector->[2]);
1434 my($k,$comp,$sum);
1435
1436 croak "Math::MatrixReal::vector_norm(): vector is not a column vector"
1437 unless ($cols == 1);
1438
1439 croak "Math::MatrixReal::vector_norm(): norm index must be > 0"
1440 unless ($n > 0);
1441
1442 croak "Math::MatrixReal::vector_norm(): norm index must be integer"
1443 unless ($n == int($n));
1444
1445 $sum = 0;
1446 for ( $k = 0; $k < $rows; $k++ )
1447 {
1448 $comp = abs( $vector->[0][$k][0] );
1449 $sum += $comp ** $n;
1450 }
1451 return( $sum ** (1 / $n) );
1452 }
1453
1454 Note that the case "n = 1" is the "one"-norm for matrices applied
       to a vector, the case "n = 2" is the Euclidean norm or length of a
1456 vector, and if "n" goes to infinity, you have the "infinity"- or
1457 "maximum"-norm for matrices applied to a vector!
1458
       · "$xn_vector = $matrix->solve_GSM($x0_vector,$b_vector,$epsilon);"

       · "$xn_vector = $matrix->solve_SSM($x0_vector,$b_vector,$epsilon);"

       · "$xn_vector = $matrix->solve_RM($x0_vector,$b_vector,$weight,$epsilon);"
1465
1466 In some cases it might not be practical or desirable to solve an
1467 equation system ""A * x = b"" using an analytical algorithm like
1468 the "decompose_LR()" and "solve_LR()" method pair.
1469
1470 In fact in some cases, due to the numerical properties (the "condi‐
1471 tion") of the matrix "A", the numerical error of the obtained
1472 result can be greater than by using an approximative (iterative)
1473 algorithm like one of the three implemented here.
1474
1475 All three methods, GSM ("Global Step Method" or "Gesamtschrittver‐
1476 fahren"), SSM ("Single Step Method" or "Einzelschrittverfahren")
1477 and RM ("Relaxation Method" or "Relaxationsverfahren"), are fix-
1478 point iterations, that is, can be described by an iteration func‐
1479 tion ""x(t+1) = Phi( x(t) )"" which has the property:
1480
1481 Phi(x) = x <==> A * x = b
1482
1483 We can define "Phi(x)" as follows:
1484
1485 Phi(x) := ( En - A ) * x + b
1486
1487 where "En" is a matrix of the same size as "A" ("n" rows and col‐
1488 umns) with one's on its main diagonal and zero's elsewhere.
1489
1490 This function has the required property.
1491
1492 Proof:
1493
1494 A * x = b
1495
1496 <==> -( A * x ) = -b
1497
1498 <==> -( A * x ) + x = -b + x
1499
1500 <==> -( A * x ) + x + b = x
1501
1502 <==> x - ( A * x ) + b = x
1503
1504 <==> ( En - A ) * x + b = x
1505
1506 This last step is true because
1507
1508 x[i] - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] ) + b[i]
1509
1510 is the same as
1511
1512 ( -a[i,1] x[1] + ... + (1 - a[i,i]) x[i] + ... + -a[i,n] x[n] ) + b[i]
1513
1514 qed
1515
1516 Note that actually solving the equation system ""A * x = b"" means
1517 to calculate
1518
1519 a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] = b[i]
1520
1521 <==> a[i,i] x[i] =
1522 b[i]
1523 - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
1524 + a[i,i] x[i]
1525
1526 <==> x[i] =
1527 ( b[i]
1528 - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
1529 + a[i,i] x[i]
1530 ) / a[i,i]
1531
1532 <==> x[i] =
1533 ( b[i] -
1534 ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
1535 a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
1536 ) / a[i,i]
1537
1538 There is one major restriction, though: a fix-point iteration is
1539 guaranteed to converge only if the first derivative of the itera‐
1540 tion function has an absolute value less than one in an area around
1541 the point "x(*)" for which ""Phi( x(*) ) = x(*)"" is to be true,
1542 and if the start vector "x(0)" lies within that area!
1543
1544           This is best verified graphically, which unfortunately is impossible
1545 to do in this textual documentation!
1546
1547 See literature on Numerical Analysis for details!
1548
1549 In our case, this restriction translates to the following three
1550 conditions:
1551
1552           There must exist a norm in which the matrix of the iteration
1553           function, "( En - A )", has a norm less than one; the matrix "A"
1554           may not have any zero value on its main diagonal; and the initial
1555           vector "x(0)" must be "good enough", i.e., "close enough" to the
1556           solution "x(*)".
1557
1558 (Remember school math: the first derivative of a straight line
1559 given by ""y = a * x + b"" is "a"!)
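
           If you want to check the first two of these conditions yourself
           before calling one of the solvers (they perform an equivalent test
           internally, as described below), a sketch along the following lines
           will do; "$A" is assumed to hold your coefficient matrix:

               my $En = $A->shadow();               # zero matrix of the same size ...
               $En->one();                          # ... turned into the identity "En"
               my $Phi = $En - $A;                  # matrix of the iteration function

               my $ok = ( $Phi->norm_one() < 1 or $Phi->norm_max() < 1 );
               for my $i ( 1 .. ($A->dim())[0] ) {
                   $ok = 0 if $A->element($i,$i) == 0;   # no zero on the main diagonal
               }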
1560
1561 The three methods expect a (quadratic!) matrix "$matrix" as their
1562 first argument, a start vector "$x0_vector", a vector "$b_vector"
1563 (which is the vector "b" in your equation system ""A * x = b""), in
1564 the case of the "Relaxation Method" ("RM"), a real number "$weight"
1565 best between zero and two, and finally an error limit (real number)
1566 "$epsilon".
1567
1568 (Note that the weight "$weight" used by the "Relaxation Method"
1569 ("RM") is NOT checked to lie within any reasonable range!)
1570
1571           The three methods first test the first two of the three conditions
1572           listed above and return an empty list if these conditions are not
1573           fulfilled.
1574
1575 Therefore, you should always test their return value using some
1576 code like:
1577
1578 if ( $xn_vector = $A_matrix->solve_GSM($x0_vector,$b_vector,1E-12) )
1579 {
1580 # do something with the solution...
1581 }
1582 else
1583 {
1584 # do something with the fact that there is no solution...
1585 }
1586
1587 Otherwise, they iterate until "abs( Phi(x) - x ) < epsilon".
1588
1589 (Beware that theoretically, infinite loops might result if the
1590 starting vector is too far "off" the solution! In practice, this
1591 shouldn't be a problem. Anyway, you can always press <ctrl-C> if
1592 you think that the iteration takes too long!)
1593
1594 The difference between the three methods is the following:
1595
1596 In the "Global Step Method" ("GSM"), the new vector ""x(t+1)""
1597 (called "y" here) is calculated from the vector "x(t)" (called "x"
1598 here) according to the formula:
1599
1600 y[i] =
1601 ( b[i]
1602 - ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
1603 a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
1604 ) / a[i,i]
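
           Spelled out with the module's element access methods, one GSM sweep
           looks roughly like this (a sketch for illustration only -
           "solve_GSM()" already does all of this for you, including the
           convergence tests described above; the GSM is also known as the
           Jacobi method):

               sub gsm_step {
                   my ($A, $x, $b) = @_;            # $x is x(t); returns y = x(t+1)
                   my ($n) = $A->dim();
                   my $y = $x->shadow();            # zero vector of the same size
                   for my $i (1 .. $n) {
                       my $sum = 0;
                       for my $j (1 .. $n) {
                           next if $j == $i;
                           $sum += $A->element($i,$j) * $x->element($j,1);
                       }
                       $y->assign($i, 1, ( $b->element($i,1) - $sum ) / $A->element($i,$i));
                   }
                   return $y;
               }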
1605
1606 In the "Single Step Method" ("SSM"), the components of the vector
1607 ""x(t+1)"" which have already been calculated are used to calculate
1608 the remaining components, i.e.
1609
1610 y[i] =
1611 ( b[i]
1612 - ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
1613 a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
1614 ) / a[i,i]
1615
1616           In the "Relaxation Method" ("RM"), the components of the vector
1617 ""x(t+1)"" are calculated by "mixing" old and new value (like cold
1618 and hot water), and the weight "$weight" determines the "aperture"
1619 of both the "hot water tap" as well as of the "cold water tap",
1620 according to the formula:
1621
1622 y[i] =
1623 ( b[i]
1624 - ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
1625 a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
1626 ) / a[i,i]
1627 y[i] = weight * y[i] + (1 - weight) * x[i]
1628
1629 Note that the weight "$weight" should be greater than zero and less
1630 than two (!).
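
           The SSM and the RM differ from the GSM sketch above only inside the
           loop. A combined sketch (again for illustration only; "$weight == 1"
           reproduces the SSM, also known as the Gauss-Seidel method, while
           other values give the RM, also known as successive over-relaxation):

               sub rm_step {
                   my ($A, $x, $b, $weight) = @_;
                   my ($n) = $A->dim();
                   my $y = $x->clone();             # start from x(t), overwrite in place
                   for my $i (1 .. $n) {
                       my $sum = 0;
                       for my $j (1 .. $n) {
                           next if $j == $i;
                           # $y->element($j,1) is already the new value for $j < $i
                           $sum += $A->element($i,$j) * $y->element($j,1);
                       }
                       my $ssm = ( $b->element($i,1) - $sum ) / $A->element($i,$i);
                       $y->assign($i, 1, $weight * $ssm + (1 - $weight) * $x->element($i,1));
                   }
                   return $y;
               }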
1631
1632 The three methods are supposed to be of different efficiency.
1633 Experiment!
1634
1635 Remember that in most cases, it is probably advantageous to first
1636 "normalize()" your equation system prior to solving it!
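
           Putting it all together (a minimal sketch; the matrix below is
           chosen close to the identity so that the norm of "( En - A )" is
           already below one and the conditions above are met without
           normalization - for a general system, "normalize()" it first as
           suggested above):

               use Math::MatrixReal;

               my $A  = Math::MatrixReal->new_from_rows(
                            [ [ 1.0,  0.25, 0.0  ],
                              [ 0.25, 1.0,  0.25 ],
                              [ 0.0,  0.25, 1.0  ] ] );
               my $b  = Math::MatrixReal->new_from_cols( [ [ 1, 2, 3 ] ] );
               my $x0 = $b->shadow();                       # zero start vector

               if ( my $x = $A->solve_GSM($x0, $b, 1E-12) ) {
                   print "GSM solution:\n$x";
                   print "residual norm: ", abs( $A * $x - $b ), "\n";
               }
               else {
                   # conditions not met - fall back to the analytical solver
                   my $LR = $A->decompose_LR();
                   my ($dim, $x_lr, $base) = $LR->solve_LR($b);
                   print "LR solution:\n$x_lr" if defined $x_lr;
               }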
1637
1638OVERLOADED OPERATORS
1639 SYNOPSIS
1640
1641 · Unary operators:
1642
1643 ""-"", ""~"", ""abs"", "test", ""!"", '""'
1644
1645 · Binary operators:
1646
1647 "".""
1648
1649        · Binary (arithmetic) operators:
1650
1651 ""+"", ""-"", ""*"", ""**"", ""+="", ""-="", ""*="",
1652            ""/="", ""**=""
1653
1654 · Binary (relational) operators:
1655
1656 ""=="", ""!="", ""<"", ""<="", "">"", "">=""
1657
1658 ""eq"", ""ne"", ""lt"", ""le"", ""gt"", ""ge""
1659
1660 Note that the latter (""eq"", ""ne"", ... ) are just synonyms of
1661 the former (""=="", ""!="", ... ), defined for convenience only.
1662
1663 DESCRIPTION
1664
1665 '.' Concatenation
1666
1667 Returns the two matrices concatenated side by side.
1668
1669 Example: $c = $a . $b;
1670
1671 For example, if
1672
1673 $a=[ 1 2 ] $b=[ 5 6 ]
1674 [ 3 4 ] [ 7 8 ]
1675 then
1676
1677 $c=[ 1 2 5 6 ]
1678 [ 3 4 7 8 ]
1679
1680 Note that only matrices with the same number of rows may be
1681 concatenated.
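
              In runnable form, using "new_from_rows()" as described earlier
              in this document:

                 my $a = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
                 my $b = Math::MatrixReal->new_from_rows( [ [5,6], [7,8] ] );
                 my $c = $a . $b;    # the 2 x 4 matrix shown above
                 print $c;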
1682
1683 '-' Unary minus
1684
1685 Returns the negative of the given matrix, i.e., the matrix
1686              with all elements multiplied by the factor "-1".
1687
1688 Example:
1689
1690 $matrix = -$matrix;
1691
1692 '~' Transposition
1693
1694              Returns the transpose of the given matrix.
1695
1696 Examples:
1697
1698 $temp = ~$vector * $vector;
1699 $length = sqrt( $temp->element(1,1) );
1700
1701 if (~$matrix == $matrix) { # matrix is symmetric ... }
1702
1703 abs Norm
1704
1705              Returns the "one"-norm of the given matrix.
1706
1707 Example:
1708
1709 $error = abs( $A * $x - $b );
1710
1711 test Boolean test
1712
1713              Tests whether there is at least one non-zero element in the
1714 matrix.
1715
1716 Example:
1717
1718 if ($xn_vector) { # result of iteration is not zero ... }
1719
1720 '!' Negated boolean test
1721
1722              Tests whether the matrix contains only zeros.
1723
1724 Examples:
1725
1726                 if (! $b_vector) { # homogeneous equation system ... }
1727                 else             { # inhomogeneous equation system ... }
1728
1729 unless ($x_vector) { # not the null-vector! }
1730
1731 '""""'
1732 "Stringify" operator
1733
1734 Converts the given matrix into a string.
1735
1736 Uses scientific representation to keep precision loss to a
1737 minimum in case you want to read this string back in again
1738 later with "new_from_string()".
1739
1740 Uses a 13-digit mantissa and a 20-character field for each
1741 element so that lines will wrap nicely on an 80-column screen.
1742
1743 Examples:
1744
1745 $matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
1746 [ 1 0 ]
1747 [ 0 -1 ]
1748 MATRIX
1749 print "$matrix";
1750
1751 [ 1.000000000000E+00 0.000000000000E+00 ]
1752 [ 0.000000000000E+00 -1.000000000000E+00 ]
1753
1754 $string = "$matrix";
1755 $test = Math::MatrixReal->new_from_string($string);
1756 if ($test == $matrix) { print ":-)\n"; } else { print ":-(\n"; }
1757
1758 '+' Addition
1759
1760 Returns the sum of the two given matrices.
1761
1762 Examples:
1763
1764 $matrix_S = $matrix_A + $matrix_B;
1765
1766 $matrix_A += $matrix_B;
1767
1768 '-' Subtraction
1769
1770 Returns the difference of the two given matrices.
1771
1772 Examples:
1773
1774 $matrix_D = $matrix_A - $matrix_B;
1775
1776 $matrix_A -= $matrix_B;
1777
1778 Note that this is the same as:
1779
1780 $matrix_S = $matrix_A + -$matrix_B;
1781
1782 $matrix_A += -$matrix_B;
1783
1784 (The latter are less efficient, though)
1785
1786 '*' Multiplication
1787
1788 Returns the matrix product of the two given matrices or the
1789 product of the given matrix and scalar factor.
1790
1791 Examples:
1792
1793 $matrix_P = $matrix_A * $matrix_B;
1794
1795 $matrix_A *= $matrix_B;
1796
1797 $vector_b = $matrix_A * $vector_x;
1798
1799 $matrix_B = -1 * $matrix_A;
1800
1801 $matrix_B = $matrix_A * -1;
1802
1803 $matrix_A *= -1;
1804
1805 '/' Division
1806
1807              Currently, $a / $b is a shortcut for $a * $b ** -1, which works
1808              for square (invertible) matrices. One can also use 1/$a.
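
              Examples (these assume that "$matrix_B" respectively "$matrix_A"
              is square and invertible):

                 $matrix_Q = $matrix_A / $matrix_B;   # same as $matrix_A * $matrix_B ** -1
                 $inverse  = 1 / $matrix_A;           # same as $matrix_A ** -1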
1809
1810 '**' Exponentiation
1811
1812 Returns the matrix raised to an integer power. If 0 is passed,
1813 the identity matrix is returned. If a negative integer is
1814              passed, it computes the inverse (if it exists) and then raises
1815 the inverse to the absolute value of the integer. The matrix
1816 must be quadratic.
1817
1818 Examples:
1819
1820 $matrix2 = $matrix ** 2;
1821
1822 $matrix **= 2;
1823
1824 $inv2 = $matrix ** -2;
1825
1826 $ident = $matrix ** 0;
1827
1828 '==' Equality
1829
1830 Tests two matrices for equality.
1831
1832 Example:
1833
1834 if ( $A * $x == $b ) { print "EUREKA!\n"; }
1835
1836              Note that in most cases, because of numerical errors (the
1837              finite precision of computer arithmetic), it is a bad idea to
1838              compare two matrices or vectors this way.
1839
1840              It is better to take the norm of the difference of the two
1841              matrices you want to compare and to compare that norm with a
1842              small number, like this:
1843
1844 if ( abs( $A * $x - $b ) < 1E-12 ) { print "BINGO!\n"; }
1845
1846 '!=' Inequality
1847
1848 Tests two matrices for inequality.
1849
1850 Example:
1851
1852 while ($x0_vector != $xn_vector) { # proceed with iteration ... }
1853
1854 (Stops when the iteration becomes stationary)
1855
1856 Note that (just like with the '==' operator), it is usually a
1857 bad idea to compare matrices or vectors this way. Compare the
1858 norm of the difference of the two matrices with a small number
1859 instead.
1860
1861 '<' Less than
1862
1863 Examples:
1864
1865 if ( $matrix1 < $matrix2 ) { # ... }
1866
1867 if ( $vector < $epsilon ) { # ... }
1868
1869 if ( 1E-12 < $vector ) { # ... }
1870
1871 if ( $A * $x - $b < 1E-12 ) { # ... }
1872
1873 These are just shortcuts for saying:
1874
1875 if ( abs($matrix1) < abs($matrix2) ) { # ... }
1876
1877 if ( abs($vector) < abs($epsilon) ) { # ... }
1878
1879 if ( abs(1E-12) < abs($vector) ) { # ... }
1880
1881 if ( abs( $A * $x - $b ) < abs(1E-12) ) { # ... }
1882
1883 Uses the "one"-norm for matrices and Perl's built-in "abs()"
1884 for scalars.
1885
1886 '<=' Less than or equal
1887
1888 As with the '<' operator, this is just a shortcut for the same
1889 expression with "abs()" around all arguments.
1890
1891 Example:
1892
1893 if ( $A * $x - $b <= 1E-12 ) { # ... }
1894
1895 which in fact is the same as:
1896
1897 if ( abs( $A * $x - $b ) <= abs(1E-12) ) { # ... }
1898
1899 Uses the "one"-norm for matrices and Perl's built-in "abs()"
1900 for scalars.
1901
1902 '>' Greater than
1903
1904              As with the '<' and '<=' operators, this
1905
1906 if ( $xn - $x0 > 1E-12 ) { # ... }
1907
1908 is just a shortcut for:
1909
1910 if ( abs( $xn - $x0 ) > abs(1E-12) ) { # ... }
1911
1912 Uses the "one"-norm for matrices and Perl's built-in "abs()"
1913 for scalars.
1914
1915 '>=' Greater than or equal
1916
1917              As with the '<', '<=' and '>' operators, the following
1918
1919 if ( $LR >= $A ) { # ... }
1920
1921 is simply a shortcut for:
1922
1923 if ( abs($LR) >= abs($A) ) { # ... }
1924
1925 Uses the "one"-norm for matrices and Perl's built-in "abs()"
1926 for scalars.
1927
1928SEE ALSO
1929 Math::VectorReal, Math::PARI, Math::MatrixBool, DFA::Kleene,
1930 Math::Kleene, Set::IntegerRange, Set::IntegerFast .
1931
1932VERSION
1933 This man page documents Math::MatrixReal version 2.02.
1934
1935 The latest version can always be found at
1936 http://leto.net/code/Math-MatrixReal/
1937
1938AUTHORS
1939 Steffen Beyer <sb@engelschall.com>, Rodolphe Ortalo <ortalo@laas.fr>,
1940 Jonathan Leto <jonathan@leto.net>.
1941
1942 Currently maintained by Jonathan Leto, send all bugs/patches to me.
1943
1944CREDITS
1945 Many thanks to Prof. Pahlings for stoking the fire of my enthusiasm for
1946 Algebra and Linear Algebra at the university (RWTH Aachen, Germany),
1947 and to Prof. Esser and his assistant, Mr. Jarausch, for their fascinat‐
1948 ing lectures in Numerical Analysis!
1949
1950COPYRIGHT
1951 Copyright (c) 1996-2002 by Steffen Beyer, Rodolphe Ortalo, Jonathan
1952 Leto. All rights reserved.
1953
1954LICENSE AGREEMENT
1955 This package is free software; you can redistribute it and/or modify it
1956 under the same terms as Perl itself.
1957
1958
1959
1960perl v5.8.8 2008-02-22 Math::MatrixReal(3)