Math::MatrixReal(3)          User Contributed Perl Documentation          Math::MatrixReal(3)
2
3
NAME
6 Math::MatrixReal - Matrix of Reals
7
8 Implements the data type "matrix of real numbers" (and consequently
9 also "vector of real numbers").

SYNOPSIS
12 my $a = Math::MatrixReal->new_random(5, 5);
13
14 my $b = $a->new_random(10, 30, { symmetric=>1, bounded_by=>[-1,1] });
15
16 my $c = $b * $a ** 3;
17
18 my $d = $b->new_from_rows( [ [ 5, 3 ,4], [3, 4, 5], [ 2, 4, 1 ] ] );
19
20 print $a;
21
22 my $row = ($a * $b)->row(3);
23
24 my $col = (5*$c)->col(2);
25
26 my $transpose = ~$c;
27
28 my $transpose = $c->transpose;
29
30 my $inverse = $a->inverse;
31
32 my $inverse = 1/$a;
33
34 my $inverse = $a ** -1;
35
36 my $determinant= $a->det;
37
38 · $matrix->display_precision($integer)
39
40 Sets the default precision when matrices are printed or
41 stringified. $matrix->display_precision(0) will only show the
42 integer part of all the entries of $matrix and
43 $matrix->display_precision() will return to the default scientific
display notation. This method does not affect the precision of the
calculations.
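
For example (a small illustrative sketch; the exact spacing of the
printed output may differ):

    $matrix = Math::MatrixReal->new_diag( [ 1.2345, 6.789 ] );
    $matrix->display_precision(2);   # entries now print like 1.23
    print $matrix;
    $matrix->display_precision();    # back to the default scientific notation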

DESCRIPTION
48 Constructor Methods And Such
49 · use Math::MatrixReal;
50
51 Makes the methods and overloaded operators of this module available
52 to your program.
53
54 · $new_matrix = new Math::MatrixReal($rows,$columns);
55
56 The matrix object constructor method. A new matrix of size $rows by
57 $columns will be created, with the value 0.0 for all elements.
58
59 Note that this method is implicitly called by many of the other
60 methods in this module.
61
62 · $new_matrix = $some_matrix->new($rows,$columns);
63
64 Another way of calling the matrix object constructor method.
65
66 Matrix $some_matrix is not changed by this in any way.
67
68 · $new_matrix = $matrix->new_from_cols( [
69 $column_vector|$array_ref|$string, ... ] )
70
71 Creates a new matrix given a reference to an array of any of the
72 following:
73
74 · column vectors ( n by 1 Math::MatrixReal matrices )
75
76 · references to arrays
77
78 · strings properly formatted to create a column with
79 Math::MatrixReal's new_from_string command
80
81 You may mix and match these as you wish. However, all must be of
82 the same dimension--no padding happens automatically. Example:
83
84 my $matrix = Math::MatrixReal->new_from_cols( [ [1,2], [3,4] ] );
85 print $matrix;
86
87 will print
88
89 [ 1.000000000000E+00 3.000000000000E+00 ]
90 [ 2.000000000000E+00 4.000000000000E+00 ]
91
92 · new_from_rows( [ $row_vector|$array_ref|$string, ... ] )
93
94 Creates a new matrix given a reference to an array of any of the
95 following:
96
97 · row vectors ( 1 by n Math::MatrixReal matrices )
98
99 · references to arrays
100
101 · strings properly formatted to create a row with
102 Math::MatrixReal's new_from_string command
103
104 You may mix and match these as you wish. However, all must be of
105 the same dimension--no padding happens automatically. Example:
106
107 my $matrix = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
108 print $matrix;
109
110 will print
111
112 [ 1.000000000000E+00 2.000000000000E+00 ]
113 [ 3.000000000000E+00 4.000000000000E+00 ]
114
115 · $new_matrix = Math::MatrixReal->new_random($rows, $cols, %options
116 );
117
This method allows you to create a random matrix with various
properties controlled by the optional %options hash. The default
values of the %options hash are { integer => 0, symmetric => 0,
tridiagonal => 0, diagonal => 0, bounded_by => [0,10] } .
123
124 Example:
125
126 $matrix = Math::MatrixReal->new_random(4, { diagonal => 1, integer => 1 } );
127 print $matrix;
128
129 will print a 4x4 random diagonal matrix with integer entries
130 between zero and ten, something like
131
132 [ 5.000000000000E+00 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
133 [ 0.000000000000E+00 2.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
134 [ 0.000000000000E+00 0.000000000000E+00 1.000000000000E+00 0.000000000000E+00 ]
135 [ 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 8.000000000000E+00 ]
136
137 · $new_matrix = Math::MatrixReal->new_diag( $array_ref );
138
139 This method allows you to create a diagonal matrix by only
140 specifying the diagonal elements. Example:
141
142 $matrix = Math::MatrixReal->new_diag( [ 1,2,3,4 ] );
143 print $matrix;
144
145 will print
146
147 [ 1.000000000000E+00 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
148 [ 0.000000000000E+00 2.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
149 [ 0.000000000000E+00 0.000000000000E+00 3.000000000000E+00 0.000000000000E+00 ]
150 [ 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 4.000000000000E+00 ]
151
152 · $new_matrix = Math::MatrixReal->new_tridiag( $lower, $diag, $upper
153 );
154
155 This method allows you to create a tridiagonal matrix by only
156 specifying the lower diagonal, diagonal and upper diagonal,
157 respectively.
158
159 $matrix = Math::MatrixReal->new_tridiag( [ 6, 4, 2 ], [1,2,3,4], [1, 8, 9] );
160 print $matrix;
161
162 will print
163
164 [ 1.000000000000E+00 1.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
165 [ 6.000000000000E+00 2.000000000000E+00 8.000000000000E+00 0.000000000000E+00 ]
166 [ 0.000000000000E+00 4.000000000000E+00 3.000000000000E+00 9.000000000000E+00 ]
167 [ 0.000000000000E+00 0.000000000000E+00 2.000000000000E+00 4.000000000000E+00 ]
168
169 · $new_matrix = Math::MatrixReal->new_from_string($string);
170
171 This method allows you to read in a matrix from a string (for
172 instance, from the keyboard, from a file or from your code).
173
The syntax is simple: each row must start with "[ " and end with
" ]\n" ("\n" being the newline character and " " a space or
tab) and contain one or more numbers, all separated from each other
by spaces or tabs.
178
179 Additional spaces or tabs can be added at will, but no comments.
180
181 Examples:
182
183 $string = "[ 1 2 3 ]\n[ 2 2 -1 ]\n[ 1 1 1 ]\n";
184 $matrix = Math::MatrixReal->new_from_string($string);
185 print "$matrix";
186
187 By the way, this prints
188
189 [ 1.000000000000E+00 2.000000000000E+00 3.000000000000E+00 ]
190 [ 2.000000000000E+00 2.000000000000E+00 -1.000000000000E+00 ]
191 [ 1.000000000000E+00 1.000000000000E+00 1.000000000000E+00 ]
192
193 But you can also do this in a much more comfortable way using the
194 shell-like "here-document" syntax:
195
196 $matrix = Math::MatrixReal->new_from_string(<<'MATRIX');
197 [ 1 0 0 0 0 0 1 ]
198 [ 0 1 0 0 0 0 0 ]
199 [ 0 0 1 0 0 0 0 ]
200 [ 0 0 0 1 0 0 0 ]
201 [ 0 0 0 0 1 0 0 ]
202 [ 0 0 0 0 0 1 0 ]
203 [ 1 0 0 0 0 0 -1 ]
204 MATRIX
205
206 You can even use variables in the matrix:
207
208 $c1 = 2 / 3;
209 $c2 = -2 / 5;
210 $c3 = 26 / 9;
211
212 $matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
213
214 [ 3 2 0 ]
215 [ 0 3 2 ]
216 [ $c1 $c2 $c3 ]
217
218 MATRIX
219
220 (Remember that you may use spaces and tabs to format the matrix to
221 your taste)
222
223 Note that this method uses exactly the same representation for a
224 matrix as the "stringify" operator "": this means that you can
225 convert any matrix into a string with "$string = "$matrix";" and
226 read it back in later (for instance from a file!).
227
228 Note however that you may suffer a precision loss in this process
229 because only 13 digits are supported in the mantissa when printed!!
230
231 If the string you supply (or someone else supplies) does not obey
232 the syntax mentioned above, an exception is raised, which can be
233 caught by "eval" as follows:
234
235 print "Please enter your matrix (in one line): ";
236 $string = <STDIN>;
237 $string =~ s/\\n/\n/g;
238 eval { $matrix = Math::MatrixReal->new_from_string($string); };
239 if ($@)
240 {
241 print "$@";
242 # ...
243 # (error handling)
244 }
245 else
246 {
247 # continue...
248 }
249
250 or as follows:
251
252 eval { $matrix = Math::MatrixReal->new_from_string(<<"MATRIX"); };
253 [ 3 2 0 ]
254 [ 0 3 2 ]
255 [ $c1 $c2 $c3 ]
256 MATRIX
257 if ($@)
258 # ...
259
260 Actually, the method shown above for reading a matrix from the
261 keyboard is a little awkward, since you have to enter a lot of
262 "\n"'s for the newlines.
263
264 A better way is shown in this piece of code:
265
266 while (1)
267 {
268 print "\nPlease enter your matrix ";
269 print "(multiple lines, <ctrl-D> = done):\n";
270 eval { $new_matrix =
271 Math::MatrixReal->new_from_string(join('',<STDIN>)); };
272 if ($@)
273 {
274 $@ =~ s/\s+at\b.*?$//;
275 print "${@}Please try again.\n";
276 }
277 else { last; }
278 }
279
280 Possible error messages of the "new_from_string()" method are:
281
282 Math::MatrixReal::new_from_string(): syntax error in input string
283 Math::MatrixReal::new_from_string(): empty input string
284
285 If the input string has rows with varying numbers of columns, the
286 following warning will be printed to STDERR:
287
288 Math::MatrixReal::new_from_string(): missing elements will be set to zero!
289
290 If everything is okay, the method returns an object reference to
291 the (newly allocated) matrix containing the elements you specified.
292
293 · $new_matrix = $some_matrix->shadow();
294
295 Returns an object reference to a NEW but EMPTY matrix (filled with
296 zero's) of the SAME SIZE as matrix "$some_matrix".
297
298 Matrix "$some_matrix" is not changed by this in any way.
299
300 · $matrix1->copy($matrix2);
301
302 Copies the contents of matrix "$matrix2" to an ALREADY EXISTING
303 matrix "$matrix1" (which must have the same size as matrix
304 "$matrix2"!).
305
306 Matrix "$matrix2" is not changed by this in any way.
307
308 · $twin_matrix = $some_matrix->clone();
309
310 Returns an object reference to a NEW matrix of the SAME SIZE as
311 matrix "$some_matrix". The contents of matrix "$some_matrix" have
312 ALREADY BEEN COPIED to the new matrix "$twin_matrix". This is the
313 method that the operator "=" is overloaded to when you type "$a =
314 $b", when $a and $b are matrices.
315
316 Matrix "$some_matrix" is not changed by this in any way.
317
318 Matrix Row, Column and Element operations
319 · $row_vector = $matrix->row($row);
320
321 This is a projection method which returns an object reference to a
322 NEW matrix (which in fact is a (row) vector since it has only one
323 row) to which row number "$row" of matrix "$matrix" has already
324 been copied.
325
326 Matrix "$matrix" is not changed by this in any way.
327
328 · $column_vector = $matrix->column($column);
329
330 This is a projection method which returns an object reference to a
331 NEW matrix (which in fact is a (column) vector since it has only
332 one column) to which column number "$column" of matrix "$matrix"
333 has already been copied.
334
335 Matrix "$matrix" is not changed by this in any way.
336
337 · $matrix->assign($row,$column,$value);
338
339 Explicitly assigns a value "$value" to a single element of the
340 matrix "$matrix", located in row "$row" and column "$column",
341 thereby replacing the value previously stored there.
342
343 · $value = $matrix->element($row,$column);
344
345 Returns the value of a specific element of the matrix "$matrix",
346 located in row "$row" and column "$column".
347
348 · $new_matrix = $matrix->each( \&function );
349
350 Creates a new matrix by evaluating a code reference on each element
351 of the given matrix. The function is passed the element, the row
352 index and the column index, in that order. The value the function
353 returns ( or the value of the last executed statement ) is the
354 value given to the corresponding element in $new_matrix.
355
356 Example:
357
358 # add 1 to every element in the matrix
359 $matrix = $matrix->each ( sub { (shift) + 1 } );
360
361 Example:
362
363 my $cofactor = $matrix->each( sub { my(undef,$i,$j) = @_;
364 ($i+$j) % 2 == 0 ? $matrix->minor($i,$j)->det()
365 : -1*$matrix->minor($i,$j)->det();
366 } );
367
This code needs some explanation. For each element of $matrix, it
throws away the actual value and stores the row and column indexes
in $i and $j. Then it sets element [$i,$j] in $cofactor to the
determinant of "$matrix->minor($i,$j)" if it is an "even" element,
or the negative of that determinant if it is an "odd" element.
373
374 · $new_matrix = $matrix->each_diag( \&function );
375
376 Creates a new matrix by evaluating a code reference on each
377 diagonal element of the given matrix. The function is passed the
378 element, the row index and the column index, in that order. The
379 value the function returns ( or the value of the last executed
380 statement ) is the value given to the corresponding element in
381 $new_matrix.
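
Example (an illustrative sketch, analogous to the "each()" example
above):

    # add 5 to every diagonal element
    $matrix = $matrix->each_diag( sub { (shift) + 5 } );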
382
383 · $matrix->swap_col( $col1, $col2 );
384
385 This method takes two one-based column numbers and swaps the values
386 of each element in each column. "$matrix->swap_col(2,3)" would
387 replace column 2 in $matrix with column 3, and replace column 3
388 with column 2.
389
390 · $matrix->swap_row( $row1, $row2 );
391
392 This method takes two one-based row numbers and swaps the values of
393 each element in each row. "$matrix->swap_row(2,3)" would replace
394 row 2 in $matrix with row 3, and replace row 3 with row 2.
395
396 · $matrix->assign_row( $row_number , $new_row_vector );
397
This method takes a one-based row number, assigns row $row_number
of $matrix the contents of $new_row_vector, and returns the
resulting matrix. "$matrix->assign_row(5, $x)" would replace row 5
in $matrix with the row vector $x.
402
403 Matrix Operations
404 · $det = $matrix->det();
405
Returns the determinant of the matrix, without going through the
rigamarole of computing an LR decomposition. This method should be
much faster than LR decomposition if the matrix is diagonal or
triangular. Otherwise, it is just a wrapper for
"$matrix->decompose_LR->det_LR". If the determinant is zero, there
is no inverse and vice-versa. Only quadratic matrices have
determinants.
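
Example (a small sketch):

    my $matrix = Math::MatrixReal->new_from_rows( [ [1, 2], [3, 4] ] );
    my $det    = $matrix->det;   # 1*4 - 2*3 = -2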
413
414 · "$inverse = $matrix->inverse();"
415
Returns the inverse of a matrix, without going through the
rigamarole of computing an LR decomposition. If no inverse exists,
undef is returned and an error is printed via "carp()". This is
nothing but a wrapper for "$matrix->decompose_LR->invert_LR".
420
421 · "($rows,$columns) = $matrix->dim();"
422
423 Returns a list of two items, representing the number of rows and
424 columns the given matrix "$matrix" contains.
425
426 · "$norm_one = $matrix->norm_one();"
427
428 Returns the "one"-norm of the given matrix "$matrix".
429
430 The "one"-norm is defined as follows:
431
432 For each column, the sum of the absolute values of the elements in
433 the different rows of that column is calculated. Finally, the
434 maximum of these sums is returned.
435
436 Note that the "one"-norm and the "maximum"-norm are mathematically
437 equivalent, although for the same matrix they usually yield a
438 different value.
439
440 Therefore, you should only compare values that have been calculated
441 using the same norm!
442
443 Throughout this package, the "one"-norm is (arbitrarily) used for
444 all comparisons, for the sake of uniformity and comparability,
445 except for the iterative methods "solve_GSM()", "solve_SSM()" and
446 "solve_RM()" which use either norm depending on the matrix itself.
447
448 · "$norm_max = $matrix->norm_max();"
449
450 Returns the "maximum"-norm of the given matrix $matrix.
451
452 The "maximum"-norm is defined as follows:
453
454 For each row, the sum of the absolute values of the elements in the
455 different columns of that row is calculated. Finally, the maximum
456 of these sums is returned.
457
458 Note that the "maximum"-norm and the "one"-norm are mathematically
459 equivalent, although for the same matrix they usually yield a
460 different value.
461
462 Therefore, you should only compare values that have been calculated
463 using the same norm!
464
465 Throughout this package, the "one"-norm is (arbitrarily) used for
466 all comparisons, for the sake of uniformity and comparability,
467 except for the iterative methods "solve_GSM()", "solve_SSM()" and
468 "solve_RM()" which use either norm depending on the matrix itself.
469
470 · "$norm_sum = $matrix->norm_sum();"
471
472 This is a very simple norm which is defined as the sum of the
473 absolute values of every element.
474
· "$p_norm = $matrix->norm_p($n);"

This function returns the "p-norm" of a vector. The argument $n
must be a number greater than or equal to 1 or the string "Inf".
The p-norm is defined as (sum(|x_i|^p))^(1/p). In words, it raises
the absolute value of each element to the p-th power, adds them up,
and then takes the p-th root of that number. If the string "Inf" is
passed, the "infinity-norm" is computed, which is really the limit
of the p-norm as p goes to infinity. It is defined as the maximum
absolute value of the elements of the vector. Also, note that the
familiar Euclidean distance between two vectors is just a special
case of a p-norm, when p is equal to 2.
487
488 Example:
489 $a = Math::MatrixReal->new_from_cols([[1,2,3]]);
490 $p1 = $a->norm_p(1);
491 $p2 = $a->norm_p(2);
492 $p3 = $a->norm_p(3);
493 $pinf = $a->norm_p("Inf");
494
495 print "(1,2,3,Inf) norm:\n$p1\n$p2\n$p3\n$pinf\n";
496
497 $i1 = $a->new_from_rows([[1,0]]);
498 $i2 = $a->new_from_rows([[0,1]]);
499
500 # this should be sqrt(2) since it is the same as the
501 # hypotenuse of a 1 by 1 right triangle
502
503 $dist = ($i1-$i2)->norm_p(2);
504 print "Distance is $dist, which should be " . sqrt(2) . "\n";
505
506 Output:
507
508 (1,2,3,Inf) norm:
509 6
510 3.74165738677394139
511 3.30192724889462668
512 3
513
514 Distance is 1.41421356237309505, which should be 1.41421356237309505
515
· "$frob_norm = $matrix->norm_frobenius();"
517
518 This norm is similar to that of a p-norm where p is 2, except it
519 acts on a matrix, not a vector. Each element of the matrix is
520 squared, this is added up, and then a square root is taken.
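
Example (a small sketch):

    my $matrix = Math::MatrixReal->new_from_rows( [ [ 3, 0 ], [ 4, 0 ] ] );
    my $frob   = $matrix->norm_frobenius();   # sqrt(3**2 + 4**2) = 5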
521
522 · "$matrix->spectral_radius();"
523
524 Returns the maximum value of the absolute value of all eigenvalues.
525 Currently this computes all eigenvalues, then sifts through them to
526 find the largest in absolute value. Needless to say, this is very
527 inefficient, and in the future an algorithm that computes only the
528 largest eigenvalue may be implemented.
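
Example (a small sketch; a diagonal matrix has its diagonal entries
as eigenvalues):

    my $matrix = Math::MatrixReal->new_diag( [ 2, -5, 1 ] );
    my $rho    = $matrix->spectral_radius();   # 5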
529
530 · "$matrix1->transpose($matrix2);"
531
532 Calculates the transposed matrix of matrix $matrix2 and stores the
533 result in matrix "$matrix1" (which must already exist and have the
534 same size as matrix "$matrix2"!).
535
536 This operation can also be carried out "in-place", i.e., input and
537 output matrix may be identical.
538
539 Transposition is a symmetry operation: imagine you rotate the
540 matrix along the axis of its main diagonal (going through elements
541 (1,1), (2,2), (3,3) and so on) by 180 degrees.
542
543 Another way of looking at it is to say that rows and columns are
544 swapped. In fact the contents of element "(i,j)" are swapped with
545 those of element "(j,i)".
546
547 Note that (especially for vectors) it makes a big difference if you
548 have a row vector, like this:
549
550 [ -1 0 1 ]
551
552 or a column vector, like this:
553
554 [ -1 ]
555 [ 0 ]
556 [ 1 ]
557
558 the one vector being the transposed of the other!
559
560 This is especially true for the matrix product of two vectors:
561
562 [ -1 ]
563 [ -1 0 1 ] * [ 0 ] = [ 2 ] , whereas
564 [ 1 ]
565
       [ -1 ]                       [  1  0 -1 ]
       [  0 ]  *  [ -1  0  1 ]  =   [  0  0  0 ]
       [  1 ]                       [ -1  0  1 ]
571
572 So be careful about what you really mean!
573
574 Hint: throughout this module, whenever a vector is explicitly
575 required for input, a COLUMN vector is expected!
576
577 · "$trace = $matrix->trace();"
578
579 This returns the trace of the matrix, which is defined as the sum
580 of the diagonal elements. The matrix must be quadratic.
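
Example (a small sketch):

    my $matrix = Math::MatrixReal->new_from_rows( [ [ 1, 5 ], [ 7, 2 ] ] );
    my $trace  = $matrix->trace();   # 1 + 2 = 3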
581
582 · "$minor = $matrix->minor($row,$col);"
583
584 Returns the minor matrix corresponding to $row and $col. $matrix
585 must be quadratic. If $matrix is n rows by n cols, the minor of
586 $row and $col will be an (n-1) by (n-1) matrix. The minor is
587 defined as crossing out the row and the col specified and returning
588 the remaining rows and columns as a matrix. This method is used by
589 "cofactor()".
590
591 · "$cofactor = $matrix->cofactor();"
592
593 The cofactor matrix is constructed as follows:
594
595 For each element, cross out the row and column that it sits in.
596 Now, take the determinant of the matrix that is left in the other
597 rows and columns. Multiply the determinant by (-1)^(i+j), where i
598 is the row index, and j is the column index. Replace the given
599 element with this value.
600
601 The cofactor matrix can be used to find the inverse of the matrix.
602 One formula for the inverse of a matrix is the cofactor matrix
603 transposed divided by the original determinant of the matrix.
604
605 The following two inverses should be exactly the same:
606
607 my $inverse1 = $matrix->inverse;
608 my $inverse2 = ~($matrix->cofactor)->each( sub { (shift)/$matrix->det() } );
609
Caveat: Although the cofactor matrix gives a simple algorithm for
computing the inverse of a matrix, and can be used with pencil and
paper for small matrices, it is comically slower than the native
"inverse()" function. Here is a small benchmark:
614
615 # $matrix1 is 15x15
616 $det = $matrix1->det;
617 timethese( 10,
618 {'inverse' => sub { $matrix1->inverse(); },
619 'cofactor' => sub { (~$matrix1->cofactor)->each ( sub { (shift)/$det; } ) }
620 } );
621
622
623 Benchmark: timing 10 iterations of LR, cofactor, inverse...
624 inverse: 1 wallclock secs ( 0.56 usr + 0.00 sys = 0.56 CPU) @ 17.86/s (n=10)
625 cofactor: 36 wallclock secs (36.62 usr + 0.01 sys = 36.63 CPU) @ 0.27/s (n=10)
626
627 · "$adjoint = $matrix->adjoint();"
628
629 The adjoint is just the transpose of the cofactor matrix. This
630 method is just an alias for " ~($matrix->cofactor)".
631
632 Arithmetic Operations
633 · "$matrix1->add($matrix2,$matrix3);"
634
635 Calculates the sum of matrix "$matrix2" and matrix "$matrix3" and
636 stores the result in matrix "$matrix1" (which must already exist
637 and have the same size as matrix "$matrix2" and matrix
638 "$matrix3"!).
639
640 This operation can also be carried out "in-place", i.e., the output
641 and one (or both) of the input matrices may be identical.
642
643 · "$matrix1->subtract($matrix2,$matrix3);"
644
645 Calculates the difference of matrix "$matrix2" minus matrix
646 "$matrix3" and stores the result in matrix "$matrix1" (which must
647 already exist and have the same size as matrix "$matrix2" and
648 matrix "$matrix3"!).
649
650 This operation can also be carried out "in-place", i.e., the output
651 and one (or both) of the input matrices may be identical.
652
653 Note that this operation is the same as
654 "$matrix1->add($matrix2,-$matrix3);", although the latter is a
655 little less efficient.
656
657 · "$matrix1->multiply_scalar($matrix2,$scalar);"
658
659 Calculates the product of matrix "$matrix2" and the number
660 "$scalar" (i.e., multiplies each element of matrix "$matrix2" with
661 the factor "$scalar") and stores the result in matrix "$matrix1"
662 (which must already exist and have the same size as matrix
663 "$matrix2"!).
664
665 This operation can also be carried out "in-place", i.e., input and
666 output matrix may be identical.
667
668 · "$product_matrix = $matrix1->multiply($matrix2);"
669
670 Calculates the product of matrix "$matrix1" and matrix "$matrix2"
671 and returns an object reference to a new matrix "$product_matrix"
672 in which the result of this operation has been stored.
673
674 Note that the dimensions of the two matrices "$matrix1" and
675 "$matrix2" (i.e., their numbers of rows and columns) must harmonize
676 in the following way (example):
677
678 [ 2 2 ]
679 [ 2 2 ]
680 [ 2 2 ]
681
682 [ 1 1 1 ] [ * * ]
683 [ 1 1 1 ] [ * * ]
684 [ 1 1 1 ] [ * * ]
685 [ 1 1 1 ] [ * * ]
686
687 I.e., the number of columns of matrix "$matrix1" has to be the same
688 as the number of rows of matrix "$matrix2".
689
690 The number of rows and columns of the resulting matrix
691 "$product_matrix" is determined by the number of rows of matrix
692 "$matrix1" and the number of columns of matrix "$matrix2",
693 respectively.
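
A small sketch of the dimension rule above (a 2 by 3 matrix times a
3 by 2 matrix yields a 2 by 2 matrix):

    my $a = Math::MatrixReal->new_from_rows( [ [ 1, 1, 1 ], [ 2, 2, 2 ] ] );      # 2 x 3
    my $b = Math::MatrixReal->new_from_rows( [ [ 1, 2 ], [ 3, 4 ], [ 5, 6 ] ] );  # 3 x 2
    my $product       = $a->multiply($b);      # or simply: $a * $b
    my ($rows, $cols) = $product->dim();       # 2, 2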
694
695 · "$matrix1->negate($matrix2);"
696
697 Calculates the negative of matrix "$matrix2" (i.e., multiplies all
698 elements with "-1") and stores the result in matrix "$matrix1"
699 (which must already exist and have the same size as matrix
700 "$matrix2"!).
701
702 This operation can also be carried out "in-place", i.e., input and
703 output matrix may be identical.
704
705 · "$matrix_to_power = $matrix1->exponent($integer);"
706
Raises the matrix to the $integer power. Obviously, $integer must
be an integer. If it is zero, the identity matrix is returned. If a
negative integer is given, the inverse will be computed (if it
exists) and then raised to the absolute value of $integer. The
matrix must be quadratic.
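
Example (a small sketch):

    my $matrix = Math::MatrixReal->new_from_rows( [ [ 2, 0 ], [ 0, 3 ] ] );
    my $cubed    = $matrix->exponent(3);    # same as $matrix ** 3
    my $identity = $matrix->exponent(0);    # the 2x2 identity matrix
    my $inv_sq   = $matrix->exponent(-2);   # the inverse of $matrix, squared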
712
713 Boolean Matrix Operations
714 · "$matrix->is_quadratic();"
715
Returns a boolean value indicating if the given matrix is quadratic
(also known as "square" or "n by n"). A matrix is quadratic if it
has the same number of rows as it does columns.
719
720 · "$matrix->is_square();"
721
722 This is an alias for "is_quadratic()".
723
724 · "$matrix->is_symmetric();"
725
726 Returns a boolean value indicating if the given matrix is
727 symmetric. By definition, a matrix is symmetric if and only if
728 (M[i,j]=M[j,i]). This is equivalent to "($matrix == ~$matrix)" but
729 without memory allocation. Only quadratic matrices can be
730 symmetric.
731
732 Notes: A symmetric matrix always has real eigenvalues/eigenvectors.
733 A matrix plus its transpose is always symmetric.
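
A small sketch illustrating the note above (using the overloaded "+"
and "~" operators):

    my $a = Math::MatrixReal->new_random(4);
    my $s = $a + ~$a;                             # a matrix plus its transpose...
    print "symmetric\n" if $s->is_symmetric();    # ...is always symmetric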
734
735 · "$matrix->is_skew_symmetric();"
736
Returns a boolean value indicating if the given matrix is skew
symmetric. By definition, a matrix is skew symmetric if and only if
(M[i,j]=-M[j,i]). This is equivalent to "($matrix == -(~$matrix))"
but without memory allocation. Only quadratic matrices can be skew
symmetric.
742
743 · "$matrix->is_diagonal();"
744
745 Returns a boolean value indicating if the given matrix is diagonal,
746 i.e. all of the nonzero elements are on the main diagonal. Only
747 quadratic matrices can be diagonal.
748
749 · "$matrix->is_tridiagonal();"
750
751 Returns a boolean value indicating if the given matrix is
752 tridiagonal, i.e. all of the nonzero elements are on the main
753 diagonal or the diagonals above and below the main diagonal. Only
754 quadratic matrices can be tridiagonal.
755
756 · "$matrix->is_upper_triangular();"
757
758 Returns a boolean value indicating if the given matrix is upper
759 triangular, i.e. all of the nonzero elements not on the main
760 diagonal are above it. Only quadratic matrices can be upper
761 triangular. Note: diagonal matrices are both upper and lower
762 triangular.
763
764 · "$matrix->is_lower_triangular();"
765
766 Returns a boolean value indicating if the given matrix is lower
767 triangular, i.e. all of the nonzero elements not on the main
768 diagonal are below it. Only quadratic matrices can be lower
769 triangular. Note: diagonal matrices are both upper and lower
770 triangular.
771
772 · "$matrix->is_orthogonal();"
773
Returns a boolean value indicating if the given matrix is
orthogonal. An orthogonal matrix has the property that its
transpose equals its inverse. Instead of computing each and
comparing them, this method multiplies the matrix by its
transpose, and returns true if this turns out to be the identity
matrix, false otherwise. Only quadratic matrices can be orthogonal.
780
781 · "$matrix->is_binary();"
782
783 Returns a boolean value indicating if the given matrix is binary.
784 A matrix is binary if it contains only zeroes or ones.
785
786 · "$matrix->is_gramian();"
787
Returns a boolean value indicating if the given matrix is Gramian.
789 A matrix $A is Gramian if and only if there exists a square matrix
790 $B such that "$A = ~$B*$B". This is equivalent to checking if $A is
791 symmetric and has all nonnegative eigenvalues, which is what
792 Math::MatrixReal uses to check for this property.
793
794 · "$matrix->is_LR();"
795
796 Returns a boolean value indicating if the matrix is an LR
797 decomposition matrix.
798
799 · "$matrix->is_positive();"
800
801 Returns a boolean value indicating if the matrix contains only
802 positive entries. Note that a zero entry is not positive and will
803 cause "is_positive()" to return false.
804
805 · "$matrix->is_negative();"
806
807 Returns a boolean value indicating if the matrix contains only
808 negative entries. Note that a zero entry is not negative and will
809 cause "is_negative()" to return false.
810
811 · "$matrix->is_periodic($k);"
812
813 Returns a boolean value indicating if the matrix is periodic with
814 period $k. This is true if "$matrix ** ($k+1) == $matrix". When
815 "$k == 1", this reduces down to the "is_idempotent()" function.
816
817 · "$matrix->is_idempotent();"
818
819 Returns a boolean value indicating if the matrix is idempotent,
820 which is defined as the square of the matrix being equal to the
821 original matrix, i.e "$matrix ** 2 == $matrix".
822
823 · "$matrix->is_row_vector();"
824
825 Returns a boolean value indicating if the matrix is a row vector.
826 A row vector is a matrix which is 1xn. Note that the 1x1 matrix is
827 both a row and column vector.
828
829 · "$matrix->is_col_vector();"
830
831 Returns a boolean value indicating if the matrix is a col vector.
832 A col vector is a matrix which is nx1. Note that the 1x1 matrix is
833 both a row and column vector.
834
835 Eigensystems
836 · "($l, $V) = $matrix->sym_diagonalize();"
837
838 This method performs the diagonalization of the quadratic symmetric
839 matrix M stored in $matrix. On output, l is a column vector
840 containing all the eigenvalues of M and V is an orthogonal matrix
whose columns are the corresponding normalized eigenvectors. The
842 primary property of an eigenvalue l and an eigenvector x is of course
843 that: M * x = l * x.
844
845 The method uses a Householder reduction to tridiagonal form followed
by a QL algorithm with implicit shifts on this tridiagonal. (The
847 tridiagonal matrix is kept internally in a compact form in this
848 routine to save memory.) In fact, this routine wraps the
849 householder() and tri_diagonalize() methods described below when
850 their intermediate results are not desired. The overall algorithmic
851 complexity of this technique is O(N^3). According to several books,
852 the coefficient hidden by the 'O' is one of the best possible for
general (symmetric) matrices.
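
A small sketch of the calling convention, checking the defining
property M * x = l * x for the first eigenpair (up to numerical
error):

    my $matrix  = Math::MatrixReal->new_from_rows( [ [ 2, 1 ], [ 1, 2 ] ] );
    my ($l, $V) = $matrix->sym_diagonalize();
    my $lambda1 = $l->element(1,1);     # first eigenvalue
    my $x1      = $V->column(1);        # corresponding normalized eigenvector
    print abs( $matrix * $x1 - $lambda1 * $x1 ), "\n";   # should be close to zero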
854
855 · "($T, $Q) = $matrix->householder();"
856
857 This method performs the Householder algorithm which reduces the n by
858 n real symmetric matrix M contained in $matrix to tridiagonal form.
859 On output, T is a symmetric tridiagonal matrix (only diagonal and
860 off-diagonal elements are non-zero) and Q is an orthogonal matrix
performing the transformation between M and T ("$M == $Q * $T * ~$Q").
862
863 · "($l, $V) = $T->tri_diagonalize([$Q]);"
864
865 This method diagonalizes the symmetric tridiagonal matrix T. On
866 output, $l and $V are similar to the output values described for
867 sym_diagonalize().
868
869 The optional argument $Q corresponds to an orthogonal transformation
870 matrix Q that should be used additionally during V (eigenvectors)
871 computation. It should be supplied if the desired eigenvectors
872 correspond to a more general symmetric matrix M previously reduced by
873 the householder() method, not a mere tridiagonal. If T is really a
874 tridiagonal matrix, Q can be omitted (it will be internally created
875 in fact as an identity matrix). The method uses a QL algorithm (with
876 implicit shifts).
877
878 · "$l = $matrix->sym_eigenvalues();"
879
880 This method computes the eigenvalues of the quadratic symmetric
881 matrix M stored in $matrix. On output, l is a column vector
882 containing all the eigenvalues of M. Eigenvectors are not computed
(in contrast to "sym_diagonalize()") and this method is more
884 efficient (even though it uses a similar algorithm with two phases).
885 However, understand that the algorithmic complexity of this technique
886 is still also O(N^3). But the coefficient hidden by the 'O' is better
887 by a factor of..., well, see your benchmark, it's wiser.
888
889 This routine wraps the householder_tridiagonal() and
890 tri_eigenvalues() methods described below when the intermediate
891 tridiagonal matrix is not needed.
892
893 · "$T = $matrix->householder_tridiagonal();"
894
895 This method performs the Householder algorithm which reduces the n by
896 n real symmetric matrix M contained in $matrix to tridiagonal form.
897 On output, T is the obtained symmetric tridiagonal matrix (only
898 diagonal and off-diagonal elements are non-zero). The operation is
899 similar to the householder() method, but potentially a little more
900 efficient as the transformation matrix is not computed.
901
902 · $l = $T->tri_eigenvalues();
903
This method computes the eigenvalues of the symmetric tridiagonal
905 matrix T. On output, $l is a vector containing the eigenvalues
906 (similar to "sym_eigenvalues()"). This method is much more efficient
907 than tri_diagonalize() when eigenvectors are not needed.
908
909 Miscellaneous
910 · $matrix->zero();
911
912 Assigns a zero to every element of the matrix "$matrix", i.e.,
913 erases all values previously stored there, thereby effectively
914 transforming the matrix into a "zero"-matrix or "null"-matrix, the
915 neutral element of the addition operation in a Ring.
916
(For instance the (quadratic) matrices with "n" rows and columns
together with matrix addition and multiplication form a Ring. The
most prominent characteristic of this Ring is that multiplication
is not commutative, i.e., in general, ""matrix1 * matrix2"" is not
the same as ""matrix2 * matrix1""!)
922
923 · $matrix->one();
924
925 Assigns one's to the elements on the main diagonal (elements (1,1),
926 (2,2), (3,3) and so on) of matrix "$matrix" and zero's to all
927 others, thereby erasing all values previously stored there and
928 transforming the matrix into a "one"-matrix, the neutral element of
929 the multiplication operation in a Ring.
930
931 (If the matrix is quadratic (which this method doesn't require,
932 though), then multiplying this matrix with itself yields this same
933 matrix again, and multiplying it with some other matrix leaves that
934 other matrix unchanged!)
935
936 · "$latex_string = $matrix->as_latex( align=> "c", format => "%s",
937 name => "" );"
938
This function returns the matrix as a LaTeX string. It takes a hash
as an argument which is used to control the style of the output.
The hash element "align" may be "c", "l" or "r", corresponding to
center, left and right, respectively. The "format" element is a
format string that is given to "sprintf" to control the style of
the number format, such as floating point or scientific notation.
The "name" element can be used so that a LaTeX string of "$name = "
is prepended to the string.
947
948 Example:
949
950 my $a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
951 print $a->as_latex( ( format => "%.2f", align => "l",name => "A" ) );
952
953 Output:
954 $A = $ $
955 \left( \begin{array}{ll}
956 1.23&1.00 \\
957 5.68&2.00 \\
958 9.10&3.00
959 \end{array} \right)
960 $
961
962 · "$yacas_string = $matrix->as_yacas( format => "%s", name => "",
963 semi => 0 );"
964
This function returns the matrix as a string that can be read by
Yacas. It takes a hash as an argument which controls the style
of the output. The "format" element is a format string that is
given to "sprintf" to control the style of the number format, such
as floating point or scientific notation. The "name" element can be
used so that "$name = " is prepended to the string. The "semi"
element can be set to 1 so that a semicolon is appended (so Yacas
does not print out the matrix).
973
974 Example:
975
976 $a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
977 print $a->as_yacas( ( format => "%.2f", align => "l",name => "A" ) );
978
979 Output:
980
981 A := {{1.23,1.00},{5.68,2.00},{9.10,3.00}}
982
983 · "$matlab_string = $matrix->as_matlab( format => "%s", name => "",
984 semi => 0 );"
985
This function returns the matrix as a string that can be read by
Matlab. It takes a hash as an argument which controls the style
of the output. The "format" element is a format string that is
given to "sprintf" to control the style of the number format, such
as floating point or scientific notation. The "name" element can be
used so that "$name = " is prepended to the string. The "semi"
element can be set to 1 so that a semicolon is appended (so Matlab
does not print out the matrix).
994
995 Example:
996
997 my $a = Math::MatrixReal->new_from_rows([[ 1.234, 5.678, 9.1011],[1,2,3]] );
998 print $a->as_matlab( ( format => "%.3f", name => "A",semi => 1 ) );
999
1000 Output:
1001 A = [ 1.234 5.678 9.101;
1002 1.000 2.000 3.000];
1003
1004 · "$scilab_string = $matrix->as_scilab( format => "%s", name => "",
1005 semi => 0 );"
1006
1007 This function is just an alias for "as_matlab()", since both Scilab
1008 and Matlab have the same matrix format.
1009
· "$minimum = Math::MatrixReal::min($number1,$number2);"
"$minimum = Math::MatrixReal::min($matrix);"
"$minimum = $matrix->min;"

Returns the minimum of the two numbers "$number1" and "$number2"
if called with two arguments, or returns the value of the smallest
element of a matrix if called with one argument or as an object
method.
1017
· "$maximum = Math::MatrixReal::max($number1,$number2);"
"$maximum = Math::MatrixReal::max($matrix);"
"$maximum = $matrix->max;"

Returns the maximum of the two numbers "$number1" and "$number2"
if called with two arguments, or returns the value of the largest
element of a matrix if called with one argument or as an object
method.
1026
1027 · "$minimal_cost_matrix = $cost_matrix->kleene();"
1028
1029 Copies the matrix "$cost_matrix" (which has to be quadratic!) to a
1030 new matrix of the same size (i.e., "clones" the input matrix) and
1031 applies Kleene's algorithm to it.
1032
1033 See Math::Kleene(3) for more details about this algorithm!
1034
1035 The method returns an object reference to the new matrix.
1036
1037 Matrix "$cost_matrix" is not changed by this method in any way.
1038
1039 · "($norm_matrix,$norm_vector) = $matrix->normalize($vector);"
1040
1041 This method is used to improve the numerical stability when solving
1042 linear equation systems.
1043
1044 Suppose you have a matrix "A" and a vector "b" and you want to find
1045 out a vector "x" so that "A * x = b", i.e., the vector "x" which
1046 solves the equation system represented by the matrix "A" and the
1047 vector "b".
1048
1049 Applying this method to the pair (A,b) yields a pair (A',b') where
1050 each row has been divided by (the absolute value of) the greatest
1051 coefficient appearing in that row. So this coefficient becomes
1052 equal to "1" (or "-1") in the new pair (A',b') (all others become
1053 smaller than one and greater than minus one).
1054
1055 Note that this operation does not change the equation system itself
1056 because the same division is carried out on either side of the
1057 equation sign!
1058
1059 The method requires a quadratic (!) matrix "$matrix" and a vector
1060 "$vector" for input (the vector must be a column vector with the
1061 same number of rows as the input matrix) and returns a list of two
1062 items which are object references to a new matrix and a new vector,
1063 in this order.
1064
1065 The output matrix and vector are clones of the input matrix and
1066 vector to which the operation explained above has been applied.
1067
1068 The input matrix and vector are not changed by this in any way.
1069
Example of how this method can affect the result of the methods to
solve equation systems (explained immediately below):
1073
1074 Consider the following little program:
1075
1076 #!perl -w
1077
1078 use Math::MatrixReal qw(new_from_string);
1079
1080 $A = Math::MatrixReal->new_from_string(<<"MATRIX");
1081 [ 1 2 3 ]
1082 [ 5 7 11 ]
1083 [ 23 19 13 ]
1084 MATRIX
1085
1086 $b = Math::MatrixReal->new_from_string(<<"MATRIX");
1087 [ 0 ]
1088 [ 1 ]
1089 [ 29 ]
1090 MATRIX
1091
1092 $LR = $A->decompose_LR();
1093 if (($dim,$x,$B) = $LR->solve_LR($b))
1094 {
1095 $test = $A * $x;
1096 print "x = \n$x";
1097 print "A * x = \n$test";
1098 }
1099
1100 ($A_,$b_) = $A->normalize($b);
1101
1102 $LR = $A_->decompose_LR();
1103 if (($dim,$x,$B) = $LR->solve_LR($b_))
1104 {
1105 $test = $A * $x;
1106 print "x = \n$x";
1107 print "A * x = \n$test";
1108 }
1109
1110 This will print:
1111
1112 x =
1113 [ 1.000000000000E+00 ]
1114 [ 1.000000000000E+00 ]
1115 [ -1.000000000000E+00 ]
1116 A * x =
1117 [ 4.440892098501E-16 ]
1118 [ 1.000000000000E+00 ]
1119 [ 2.900000000000E+01 ]
1120 x =
1121 [ 1.000000000000E+00 ]
1122 [ 1.000000000000E+00 ]
1123 [ -1.000000000000E+00 ]
1124 A * x =
1125 [ 0.000000000000E+00 ]
1126 [ 1.000000000000E+00 ]
1127 [ 2.900000000000E+01 ]
1128
1129 You can see that in the second example (where "normalize()" has
1130 been used), the result is "better", i.e., more accurate!
1131
1132 · "$LR_matrix = $matrix->decompose_LR();"
1133
1134 This method is needed to solve linear equation systems.
1135
1136 Suppose you have a matrix "A" and a vector "b" and you want to find
1137 out a vector "x" so that "A * x = b", i.e., the vector "x" which
1138 solves the equation system represented by the matrix "A" and the
1139 vector "b".
1140
1141 You might also have a matrix "A" and a whole bunch of different
1142 vectors "b1".."bk" for which you need to find vectors "x1".."xk" so
1143 that "A * xi = bi", for "i=1..k".
1144
1145 Using Gaussian transformations (multiplying a row or column with a
1146 factor, swapping two rows or two columns and adding a multiple of
1147 one row or column to another), it is possible to decompose any
1148 matrix "A" into two triangular matrices, called "L" and "R" (for
1149 "Left" and "Right").
1150
1151 "L" has one's on the main diagonal (the elements (1,1), (2,2),
1152 (3,3) and so so), non-zero values to the left and below of the main
1153 diagonal and all zero's in the upper right half of the matrix.
1154
1155 "R" has non-zero values on the main diagonal as well as to the
1156 right and above of the main diagonal and all zero's in the lower
1157 left half of the matrix, as follows:
1158
1159 [ 1 0 0 0 0 ] [ x x x x x ]
1160 [ x 1 0 0 0 ] [ 0 x x x x ]
1161 L = [ x x 1 0 0 ] R = [ 0 0 x x x ]
1162 [ x x x 1 0 ] [ 0 0 0 x x ]
1163 [ x x x x 1 ] [ 0 0 0 0 x ]
1164
Note that ""L * R"" is equivalent to matrix "A" in the sense that
"L * R * x = b  <==>  A * x = b" for all vectors "x", disregarding
permutations of the rows and columns (these are taken care of
"magically" by this module!) and numerical errors.
1169
1170 Trick:
1171
1172 Because we know that "L" has one's on its main diagonal, we can
1173 store both matrices together in the same array without information
1174 loss! I.e.,
1175
1176 [ R R R R R ]
1177 [ L R R R R ]
1178 LR = [ L L R R R ]
1179 [ L L L R R ]
1180 [ L L L L R ]
1181
1182 Beware, though, that "LR" and ""L * R"" are not the same!!!
1183
1184 Note also that for the same reason, you cannot apply the method
1185 "normalize()" to an "LR" decomposition matrix. Trying to do so will
1186 yield meaningless rubbish!
1187
1188 (You need to apply "normalize()" to each pair (Ai,bi) BEFORE
1189 decomposing the matrix "Ai'"!)
1190
1191 Now what does all this help us in solving linear equation systems?
1192
1193 It helps us because a triangular matrix is the next best thing that
1194 can happen to us besides a diagonal matrix (a matrix that has non-
1195 zero values only on its main diagonal - in which case the solution
1196 is trivial, simply divide ""b[i]"" by ""A[i,i]"" to get ""x[i]""!).
1197
1198 To find the solution to our problem ""A * x = b"", we divide this
1199 problem in parts: instead of solving "A * x = b" directly, we first
1200 decompose "A" into "L" and "R" and then solve ""L * y = b"" and
1201 finally ""R * x = y"" (motto: divide and rule!).
1202
1203 From the illustration above it is clear that solving ""L * y = b""
1204 and ""R * x = y"" is straightforward: we immediately know that
1205 "y[1] = b[1]". We then deduce swiftly that
1206
1207 y[2] = b[2] - L[2,1] * y[1]
1208
1209 (and we know ""y[1]"" by now!), that
1210
1211 y[3] = b[3] - L[3,1] * y[1] - L[3,2] * y[2]
1212
1213 and so on.
1214
1215 Having effortlessly calculated the vector "y", we now proceed to
1216 calculate the vector "x" in a similar fashion: we see immediately
1217 that "x[n] = y[n] / R[n,n]". It follows that
1218
1219 x[n-1] = ( y[n-1] - R[n-1,n] * x[n] ) / R[n-1,n-1]
1220
1221 and
1222
1223 x[n-2] = ( y[n-2] - R[n-2,n-1] * x[n-1] - R[n-2,n] * x[n] )
1224 / R[n-2,n-2]
1225
1226 and so on.
1227
1228 You can see that - especially when you have many vectors "b1".."bk"
1229 for which you are searching solutions to "A * xi = bi" - this
1230 scheme is much more efficient than a straightforward, "brute force"
1231 approach.
1232
1233 This method requires a quadratic matrix as its input matrix.
1234
1235 If you don't have that many equations, fill up with zero's (i.e.,
1236 do nothing to fill the superfluous rows if it's a "fresh" matrix,
1237 i.e., a matrix that has been created with "new()" or "shadow()").
1238
1239 The method returns an object reference to a new matrix containing
1240 the matrices "L" and "R".
1241
1242 The input matrix is not changed by this method in any way.
1243
1244 Note that you can "copy()" or "clone()" the result of this method
1245 without losing its "magical" properties (for instance concerning
1246 the hidden permutations of its rows and columns).
1247
1248 However, as soon as you are applying any method that alters the
1249 contents of the matrix, its "magical" properties are stripped off,
1250 and the matrix immediately reverts to an "ordinary" matrix (with
1251 the values it just happens to contain at that moment, be they
1252 meaningful as an ordinary matrix or not!).
1253
· "($dimension,$x_vector,$base_matrix) =
$LR_matrix->solve_LR($b_vector);"
1256
1257 Use this method to actually solve an equation system.
1258
1259 Matrix "$LR_matrix" must be a (quadratic) matrix returned by the
1260 method "decompose_LR()", the LR decomposition matrix of the matrix
1261 "A" of your equation system "A * x = b".
1262
1263 The input vector "$b_vector" is the vector "b" in your equation
1264 system "A * x = b", which must be a column vector and have the same
1265 number of rows as the input matrix "$LR_matrix".
1266
1267 The method returns a list of three items if a solution exists or an
1268 empty list otherwise (!).
1269
1270 Therefore, you should always use this method like this:
1271
1272 if ( ($dim,$x_vec,$base) = $LR->solve_LR($b_vec) )
1273 {
1274 # do something with the solution...
1275 }
1276 else
1277 {
1278 # do something with the fact that there is no solution...
1279 }
1280
1281 The three items returned are: the dimension "$dimension" of the
1282 solution space (which is zero if only one solution exists, one if
1283 the solution is a straight line, two if the solution is a plane,
1284 and so on), the solution vector "$x_vector" (which is the vector
1285 "x" of your equation system "A * x = b") and a matrix
1286 "$base_matrix" representing a base of the solution space (a set of
1287 vectors which put up the solution space like the spokes of an
1288 umbrella).
1289
1290 Only the first "$dimension" columns of this base matrix actually
1291 contain entries, the remaining columns are all zero.
1292
1293 Now what is all this stuff with that "base" good for?
1294
1295 The output vector "x" is ALWAYS a solution of your equation system
1296 "A * x = b".
1297
1298 But also any vector "$vector"
1299
1300 $vector = $x_vector->clone();
1301
1302 $machine_infinity = 1E+99; # or something like that
1303
1304 for ( $i = 1; $i <= $dimension; $i++ )
1305 {
1306 $vector += rand($machine_infinity) * $base_matrix->column($i);
1307 }
1308
1309 is a solution to your problem "A * x = b", i.e., if "$A_matrix"
1310 contains your matrix "A", then
1311
1312 print abs( $A_matrix * $vector - $b_vector ), "\n";
1313
1314 should print a number around 1E-16 or so!
1315
By the way, note that you can actually calculate those vectors
"$vector" a little more efficiently as follows:
1318
1319 $rand_vector = $x_vector->shadow();
1320
1321 $machine_infinity = 1E+99; # or something like that
1322
1323 for ( $i = 1; $i <= $dimension; $i++ )
1324 {
1325 $rand_vector->assign($i,1, rand($machine_infinity) );
1326 }
1327
1328 $vector = $x_vector + ( $base_matrix * $rand_vector );
1329
1330 Note that the input matrix and vector are not changed by this
1331 method in any way.
1332
1333 · "$inverse_matrix = $LR_matrix->invert_LR();"
1334
1335 Use this method to calculate the inverse of a given matrix
1336 "$LR_matrix", which must be a (quadratic) matrix returned by the
1337 method "decompose_LR()".
1338
1339 The method returns an object reference to a new matrix of the same
1340 size as the input matrix containing the inverse of the matrix that
1341 you initially fed into "decompose_LR()" IF THE INVERSE EXISTS, or
1342 an empty list otherwise.
1343
1344 Therefore, you should always use this method in the following way:
1345
1346 if ( $inverse_matrix = $LR->invert_LR() )
1347 {
1348 # do something with the inverse matrix...
1349 }
1350 else
1351 {
1352 # do something with the fact that there is no inverse matrix...
1353 }
1354
1355 Note that by definition (disregarding numerical errors), the
1356 product of the initial matrix and its inverse (or vice-versa) is
1357 always a matrix containing one's on the main diagonal (elements
1358 (1,1), (2,2), (3,3) and so on) and zero's elsewhere.
1359
1360 The input matrix is not changed by this method in any way.
1361
1362 · "$condition = $matrix->condition($inverse_matrix);"
1363
1364 In fact this method is just a shortcut for
1365
1366 abs($matrix) * abs($inverse_matrix)
1367
1368 Both input matrices must be quadratic and have the same size, and
1369 the result is meaningful only if one of them is the inverse of the
1370 other (for instance, as returned by the method "invert_LR()").
1371
1372 The number returned is a measure of the "condition" of the given
1373 matrix "$matrix", i.e., a measure of the numerical stability of the
1374 matrix.
1375
1376 This number is always positive, and the smaller its value, the
1377 better the condition of the matrix (the better the stability of all
1378 subsequent computations carried out using this matrix).
1379
1380 Numerical stability means for example that if
1381
1382 abs( $vec_correct - $vec_with_error ) < $epsilon
1383
1384 holds, there must be a "$delta" which doesn't depend on the vector
1385 "$vec_correct" (nor "$vec_with_error", by the way) so that
1386
1387 abs( $matrix * $vec_correct - $matrix * $vec_with_error ) < $delta
1388
1389 also holds.
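
Example (a small sketch using the LR decomposition methods described
elsewhere in this section):

    my $matrix = Math::MatrixReal->new_from_rows( [ [ 1, 2 ], [ 3, 4 ] ] );
    my $LR     = $matrix->decompose_LR();
    if ( my $inverse = $LR->invert_LR() )
    {
        my $condition = $matrix->condition($inverse);
        print "condition: $condition\n";
    }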
1390
1391 · "$determinant = $LR_matrix->det_LR();"
1392
1393 Calculates the determinant of a matrix, whose LR decomposition
1394 matrix "$LR_matrix" must be given (which must be a (quadratic)
1395 matrix returned by the method "decompose_LR()").
1396
1397 In fact the determinant is a by-product of the LR decomposition: It
1398 is (in principle, that is, except for the sign) simply the product
1399 of the elements on the main diagonal (elements (1,1), (2,2), (3,3)
1400 and so on) of the LR decomposition matrix.
1401
1402 (The sign is taken care of "magically" by this module)
1403
1404 · "$order = $LR_matrix->order_LR();"
1405
1406 Calculates the order (called "Rang" in German) of a matrix, whose
1407 LR decomposition matrix "$LR_matrix" must be given (which must be a
1408 (quadratic) matrix returned by the method "decompose_LR()").
1409
This number is a measure of the number of linearly independent row
and column vectors (= number of linearly independent equations in
the case of a matrix representing an equation system) of the matrix
1413 that was initially fed into "decompose_LR()".
1414
1415 If "n" is the number of rows and columns of the (quadratic!)
1416 matrix, then "n - order" is the dimension of the solution space of
1417 the associated equation system.
1418
1419 · "$rank = $LR_matrix->rank_LR();"
1420
1421 This is an alias for the "order_LR()" function. The "order" is
1422 usually called the "rank" in the United States.
1423
1424 · "$scalar_product = $vector1->scalar_product($vector2);"
1425
1426 Returns the scalar product of vector "$vector1" and vector
1427 "$vector2".
1428
1429 Both vectors must be column vectors (i.e., a matrix having several
1430 rows but only one column).
1431
1432 This is a (more efficient!) shortcut for
1433
1434 $temp = ~$vector1 * $vector2;
1435 $scalar_product = $temp->element(1,1);
1436
1437 or the sum "i=1..n" of the products "vector1[i] * vector2[i]".
1438
1439 Provided none of the two input vectors is the null vector, then the
1440 two vectors are orthogonal, i.e., have an angle of 90 degrees
1441 between them, exactly when their scalar product is zero, and vice-
1442 versa.
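
Example (a small sketch):

    my $v1 = Math::MatrixReal->new_from_cols( [ [ 1, 2, 3 ] ] );
    my $v2 = Math::MatrixReal->new_from_cols( [ [ 4, -5, 2 ] ] );
    my $sp = $v1->scalar_product($v2);   # 1*4 + 2*(-5) + 3*2 = 0,
                                         # so the two vectors are orthogonal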
1443
1444 · "$vector_product = $vector1->vector_product($vector2);"
1445
1446 Returns the vector product of vector "$vector1" and vector
1447 "$vector2".
1448
1449 Both vectors must be column vectors (i.e., a matrix having several
1450 rows but only one column).
1451
1452 Currently, the vector product is only defined for 3 dimensions
1453 (i.e., vectors with 3 rows); all other vectors trigger an error
1454 message.
1455
1456 In 3 dimensions, the vector product of two vectors "x" and "y" is
1457 defined as
1458
1459 | x[1] y[1] e[1] |
1460 determinant | x[2] y[2] e[2] |
1461 | x[3] y[3] e[3] |
1462
1463 where the ""x[i]"" and ""y[i]"" are the components of the two
1464 vectors "x" and "y", respectively, and the ""e[i]"" are unity
1465 vectors (i.e., vectors with a length equal to one) with a one in
1466 row "i" and zero's elsewhere (this means that you have numbers and
1467 vectors as elements in this matrix!).
1468
1469 This determinant evaluates to the rather simple formula
1470
1471 z[1] = x[2] * y[3] - x[3] * y[2]
1472 z[2] = x[3] * y[1] - x[1] * y[3]
1473 z[3] = x[1] * y[2] - x[2] * y[1]
1474
1475 A characteristic property of the vector product is that the
1476 resulting vector is orthogonal to both of the input vectors (if
1477 neither of both is the null vector, otherwise this is trivial),
1478 i.e., the scalar product of each of the input vectors with the
1479 resulting vector is always zero.
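
Example (a small sketch with the canonical unit vectors):

    my $x = Math::MatrixReal->new_from_cols( [ [ 1, 0, 0 ] ] );
    my $y = Math::MatrixReal->new_from_cols( [ [ 0, 1, 0 ] ] );
    my $z = $x->vector_product($y);      # the column vector [ 0 0 1 ]
    print $x->scalar_product($z), "\n";  # 0: orthogonal to both inputs
    print $y->scalar_product($z), "\n";  # 0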
1480
1481 · "$length = $vector->length();"
1482
1483 This is actually a shortcut for
1484
1485 $length = sqrt( $vector->scalar_product($vector) );
1486
1487 and returns the length of a given column or row vector "$vector".
1488
Note that the "length" calculated by this method is in fact the
"two"-norm (also known as the Euclidean norm) of a vector "$vector"!
1491
1492 The general definition for norms of vectors is the following:
1493
1494 sub vector_norm
1495 {
1496 croak "Usage: \$norm = \$vector->vector_norm(\$n);"
1497 if (@_ != 2);
1498
1499 my($vector,$n) = @_;
1500 my($rows,$cols) = ($vector->[1],$vector->[2]);
1501 my($k,$comp,$sum);
1502
1503 croak "Math::MatrixReal::vector_norm(): vector is not a column vector"
1504 unless ($cols == 1);
1505
1506 croak "Math::MatrixReal::vector_norm(): norm index must be > 0"
1507 unless ($n > 0);
1508
1509 croak "Math::MatrixReal::vector_norm(): norm index must be integer"
1510 unless ($n == int($n));
1511
1512 $sum = 0;
1513 for ( $k = 0; $k < $rows; $k++ )
1514 {
1515 $comp = abs( $vector->[0][$k][0] );
1516 $sum += $comp ** $n;
1517 }
1518 return( $sum ** (1 / $n) );
1519 }
1520
Note that the case "n = 1" is the "one"-norm for matrices applied
to a vector, the case "n = 2" is the Euclidean norm or length of a
vector, and if "n" goes to infinity, you have the "infinity"- or
"maximum"-norm for matrices applied to a vector!
1525
· "$xn_vector = $matrix->solve_GSM($x0_vector,$b_vector,$epsilon);"

· "$xn_vector = $matrix->solve_SSM($x0_vector,$b_vector,$epsilon);"

· "$xn_vector = $matrix->solve_RM($x0_vector,$b_vector,$weight,$epsilon);"
1532
         In some cases it might not be practical or desirable to solve an
         equation system "A * x = b" using an analytical algorithm like
         the "decompose_LR()" and "solve_LR()" method pair.

         In fact, in some cases, due to the numerical properties (the
         "condition") of the matrix "A", the numerical error of the result
         obtained this way can be greater than with an approximate
         (iterative) algorithm like one of the three implemented here.
1541
         All three methods, GSM ("Global Step Method" or
         "Gesamtschrittverfahren"), SSM ("Single Step Method" or
         "Einzelschrittverfahren") and RM ("Relaxation Method" or
         "Relaxationsverfahren"), are fixed-point iterations, that is, they
         can be described by an iteration function "x(t+1) = Phi( x(t) )"
         which has the property:
1548
1549 Phi(x) = x <==> A * x = b
1550
1551 We can define "Phi(x)" as follows:
1552
1553 Phi(x) := ( En - A ) * x + b
1554
         where "En" is the identity matrix of the same size as "A" ("n"
         rows and columns), with ones on its main diagonal and zeros
         elsewhere.
1557
1558 This function has the required property.
1559
1560 Proof:
1561
1562 A * x = b
1563
1564 <==> -( A * x ) = -b
1565
1566 <==> -( A * x ) + x = -b + x
1567
1568 <==> -( A * x ) + x + b = x
1569
1570 <==> x - ( A * x ) + b = x
1571
1572 <==> ( En - A ) * x + b = x
1573
1574 This last step is true because
1575
1576 x[i] - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] ) + b[i]
1577
1578 is the same as
1579
1580 ( -a[i,1] x[1] + ... + (1 - a[i,i]) x[i] + ... + -a[i,n] x[n] ) + b[i]
1581
1582 qed
1583
         Note that actually solving the equation system "A * x = b" means
1585 to calculate
1586
1587 a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] = b[i]
1588
1589 <==> a[i,i] x[i] =
1590 b[i]
1591 - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
1592 + a[i,i] x[i]
1593
1594 <==> x[i] =
1595 ( b[i]
1596 - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
1597 + a[i,i] x[i]
1598 ) / a[i,i]
1599
1600 <==> x[i] =
1601 ( b[i] -
1602 ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
1603 a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
1604 ) / a[i,i]
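
         As a plain-Perl illustration of this component-wise formula (this
         is not the module's internal code, only a sketch of one update
         step using the documented dim(), shadow(), element() and assign()
         accessors, with all data invented for the example and the diagonal
         elements assumed to be non-zero):

             use Math::MatrixReal;

             # illustrative data: A (n x n), b and the current iterate x (n x 1)
             my $A = Math::MatrixReal->new_from_rows( [ [ 1.0, 0.1 ], [ 0.2, 1.0 ] ] );
             my $b = Math::MatrixReal->new_from_cols( [ [ 1, 2 ] ] );
             my $x = Math::MatrixReal->new(2,1);       # start vector x(0) = 0

             # one update step: compute the next iterate y from x
             my ($n) = $A->dim();                      # number of rows (and columns)
             my $y   = $x->shadow();                   # same size as x, all zeros
             for my $i (1 .. $n) {
                 my $sum = 0;
                 for my $j (1 .. $n) {
                     next if $j == $i;                 # skip the diagonal term
                     $sum += $A->element($i,$j) * $x->element($j,1);
                 }
                 $y->assign( $i, 1, ( $b->element($i,1) - $sum ) / $A->element($i,$i) );
             }
             print $y;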
1605
         There is one major restriction, though: a fixed-point iteration is
         guaranteed to converge only if the first derivative of the
         iteration function has an absolute value less than one in an area
         around the point "x(*)" for which "Phi( x(*) ) = x(*)" is to be
         true, and if the start vector "x(0)" lies within that area!
1611
1612 This is best verified graphically, which unfortunately is
1613 impossible to do in this textual documentation!
1614
1615 See literature on Numerical Analysis for details!
1616
1617 In our case, this restriction translates to the following three
1618 conditions:
1619
         There must exist a norm under which the norm of the matrix of the
         iteration function, "( En - A )", is less than one; the matrix "A"
         must not have any zero value on its main diagonal; and the initial
         vector "x(0)" must be "good enough", i.e., "close enough", to the
         solution "x(*)".
1625
1626 (Remember school math: the first derivative of a straight line
1627 given by ""y = a * x + b"" is "a"!)
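
         The first two of these conditions can be checked with the
         documented norm and element accessors. A rough sketch (only the
         "one"- and "maximum"-norms are tried here, and the matrix values
         are invented for the example):

             use Math::MatrixReal;

             my $A = Math::MatrixReal->new_from_rows( [ [ 1.0, 0.1 ], [ 0.2, 1.0 ] ] );

             my ($n) = $A->dim();
             my $En  = $A->shadow();        # matrix of the same size as A ...
             $En->one();                    # ... turned into the identity matrix

             my $iter_matrix = $En - $A;    # the matrix of the iteration function Phi

             # condition 1: some norm of ( En - A ) must be less than one
             my $norm_ok = ( $iter_matrix->norm_one() < 1 )
                        || ( $iter_matrix->norm_max() < 1 );

             # condition 2: A must not have a zero on its main diagonal
             my $diag_ok = 1;
             for my $i (1 .. $n) {
                 $diag_ok = 0 if $A->element($i,$i) == 0;
             }

             print "the iteration may converge\n" if $norm_ok && $diag_ok;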
1628
         The three methods expect a (square!) matrix "$matrix" as their
         first argument, a start vector "$x0_vector", a vector "$b_vector"
         (which is the vector "b" in your equation system "A * x = b"),
         in the case of the "Relaxation Method" ("RM") a real number
         "$weight" (best between zero and two), and finally an error limit
         (real number) "$epsilon".
1635
1636 (Note that the weight "$weight" used by the "Relaxation Method"
1637 ("RM") is NOT checked to lie within any reasonable range!)
1638
         The three methods first test the first two of the three conditions
         listed above and return an empty list if these conditions are not
         fulfilled.
1642
1643 Therefore, you should always test their return value using some
1644 code like:
1645
1646 if ( $xn_vector = $A_matrix->solve_GSM($x0_vector,$b_vector,1E-12) )
1647 {
1648 # do something with the solution...
1649 }
1650 else
1651 {
1652 # do something with the fact that there is no solution...
1653 }
1654
1655 Otherwise, they iterate until "abs( Phi(x) - x ) < epsilon".
1656
1657 (Beware that theoretically, infinite loops might result if the
1658 starting vector is too far "off" the solution! In practice, this
1659 shouldn't be a problem. Anyway, you can always press <ctrl-C> if
1660 you think that the iteration takes too long!)
1661
1662 The difference between the three methods is the following:
1663
         In the "Global Step Method" ("GSM"), the new vector "x(t+1)"
1665 (called "y" here) is calculated from the vector "x(t)" (called "x"
1666 here) according to the formula:
1667
1668 y[i] =
1669 ( b[i]
1670 - ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
1671 a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
1672 ) / a[i,i]
1673
1674 In the "Single Step Method" ("SSM"), the components of the vector
         "x(t+1)" which have already been calculated are used to calculate
1676 the remaining components, i.e.
1677
1678 y[i] =
1679 ( b[i]
1680 - ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
1681 a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
1682 ) / a[i,i]
1683
         In the "Relaxation Method" ("RM"), the components of the vector
         "x(t+1)" are calculated by "mixing" the old and the new value
         (like cold and hot water), and the weight "$weight" determines
         the "aperture" of both the "hot water tap" and the "cold water
         tap", according to the formula:
1689
1690 y[i] =
1691 ( b[i]
1692 - ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
1693 a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
1694 ) / a[i,i]
1695 y[i] = weight * y[i] + (1 - weight) * x[i]
1696
1697 Note that the weight "$weight" should be greater than zero and less
1698 than two (!).
1699
1700 The three methods are supposed to be of different efficiency.
1701 Experiment!
1702
1703 Remember that in most cases, it is probably advantageous to first
1704 "normalize()" your equation system prior to solving it!
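
         Putting it all together, a complete usage sketch for the iterative
         solvers could look like the following (all matrix and vector
         values are invented for the example; depending on your matrix you
         may want to "normalize()" the system first, as noted above):

             use Math::MatrixReal;

             # a small system A * x = b whose iteration matrix (En - A) has
             # a norm well below one (illustrative values)
             my $A  = Math::MatrixReal->new_from_rows( [ [ 1.0, 0.1, 0.0 ],
                                                         [ 0.1, 1.0, 0.1 ],
                                                         [ 0.0, 0.1, 1.0 ] ] );
             my $b  = Math::MatrixReal->new_from_cols( [ [ 1, 2, 3 ] ] );
             my $x0 = Math::MatrixReal->new(3,1);      # start vector, all zeros

             # try the Single Step Method first ...
             if ( my $xn = $A->solve_SSM($x0, $b, 1E-12) ) {
                 print "SSM solution:\n$xn";
                 printf "residual norm: %g\n", abs( $A * $xn - $b );
             }
             # ... and fall back to the Relaxation Method with weight 1.2
             elsif ( my $xr = $A->solve_RM($x0, $b, 1.2, 1E-12) ) {
                 print "RM solution:\n$xr";
             }
             else {
                 print "iteration conditions not fulfilled for this matrix\n";
             }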
1705
OVERLOADED OPERATORS
   SYNOPSIS
       · Unary operators:

         "-", "~", "abs", "test", "!", '""'

       · Binary operators:

         "."

         Binary (arithmetic) operators:

         "+", "-", "*", "**", "+=", "-=", "*=", "/=", "**="

       · Binary (relational) operators:

         "==", "!=", "<", "<=", ">", ">="

         "eq", "ne", "lt", "le", "gt", "ge"

       Note that the latter ("eq", "ne", ... ) are just synonyms of the
       former ("==", "!=", ... ), defined for convenience only.
1728
1729 DESCRIPTION
1730 '.' Concatenation
1731
1732 Returns the two matrices concatenated side by side.
1733
1734 Example: $c = $a . $b;
1735
          For example, if

             $a=[ 1 2 ]     $b=[ 5 6 ]
                [ 3 4 ]        [ 7 8 ]

          then

             $c=[ 1 2 5 6 ]
                [ 3 4 7 8 ]
1744
1745 Note that only matrices with the same number of rows may be
1746 concatenated.
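
          A runnable sketch of the concatenation operator (matrix values
          invented for the example):

              use Math::MatrixReal;

              my $a = Math::MatrixReal->new_from_rows( [ [ 1, 2 ], [ 3, 4 ] ] );
              my $b = Math::MatrixReal->new_from_rows( [ [ 5, 6 ], [ 7, 8 ] ] );

              my $c = $a . $b;   # 2 x 4 matrix: the columns of $b appended to $a
              print $c;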
1747
1748 '-' Unary minus
1749
          Returns the negative of the given matrix, i.e., the matrix with
          all of its elements multiplied by the factor "-1".
1752
1753 Example:
1754
1755 $matrix = -$matrix;
1756
1757 '~' Transposition
1758
          Returns the transpose of the given matrix.
1760
1761 Examples:
1762
1763 $temp = ~$vector * $vector;
1764 $length = sqrt( $temp->element(1,1) );
1765
1766 if (~$matrix == $matrix) { # matrix is symmetric ... }
1767
1768 abs Norm
1769
          Returns the "one"-norm of the given matrix.
1771
1772 Example:
1773
1774 $error = abs( $A * $x - $b );
1775
1776 test Boolean test
1777
          Tests whether there is at least one non-zero element in the matrix.
1779
1780 Example:
1781
1782 if ($xn_vector) { # result of iteration is not zero ... }
1783
1784 '!' Negated boolean test
1785
          Tests whether the matrix contains only zeros.
1787
1788 Examples:
1789
              if (! $b_vector) { # homogeneous equation system ... }
              else             { # inhomogeneous equation system ... }
1792
1793 unless ($x_vector) { # not the null-vector! }
1794
      '""' "Stringify" operator
1797
1798 Converts the given matrix into a string.
1799
1800 Uses scientific representation to keep precision loss to a minimum
1801 in case you want to read this string back in again later with
1802 "new_from_string()".
1803
          By default, a 13-digit mantissa and a 20-character field for each
          element are used so that lines will wrap nicely on an 80-column
          screen.
1807
1808 Examples:
1809
1810 $matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
1811 [ 1 0 ]
1812 [ 0 -1 ]
1813 MATRIX
1814 print "$matrix";
1815
1816 [ 1.000000000000E+00 0.000000000000E+00 ]
1817 [ 0.000000000000E+00 -1.000000000000E+00 ]
1818
1819 $string = "$matrix";
1820 $test = Math::MatrixReal->new_from_string($string);
1821 if ($test == $matrix) { print ":-)\n"; } else { print ":-(\n"; }
1822
1823 '+' Addition
1824
1825 Returns the sum of the two given matrices.
1826
1827 Examples:
1828
1829 $matrix_S = $matrix_A + $matrix_B;
1830
1831 $matrix_A += $matrix_B;
1832
1833 '-' Subtraction
1834
1835 Returns the difference of the two given matrices.
1836
1837 Examples:
1838
1839 $matrix_D = $matrix_A - $matrix_B;
1840
1841 $matrix_A -= $matrix_B;
1842
1843 Note that this is the same as:
1844
1845 $matrix_S = $matrix_A + -$matrix_B;
1846
1847 $matrix_A += -$matrix_B;
1848
1849 (The latter are less efficient, though)
1850
1851 '*' Multiplication
1852
1853 Returns the matrix product of the two given matrices or the
          product of the given matrix and a scalar factor.
1855
1856 Examples:
1857
1858 $matrix_P = $matrix_A * $matrix_B;
1859
1860 $matrix_A *= $matrix_B;
1861
1862 $vector_b = $matrix_A * $vector_x;
1863
1864 $matrix_B = -1 * $matrix_A;
1865
1866 $matrix_B = $matrix_A * -1;
1867
1868 $matrix_A *= -1;
1869
1870 '/' Division
1871
          "$a / $b" is currently a shortcut for "$a * $b ** -1", which
          works for square matrices. One can also use "1/$a" to compute
          the inverse of "$a".
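
          For example (a sketch with invented values; both matrices must be
          square and "$b" must be invertible):

              use Math::MatrixReal;

              my $a = Math::MatrixReal->new_from_rows( [ [ 2, 0 ], [ 0, 4 ] ] );
              my $b = Math::MatrixReal->new_from_rows( [ [ 1, 1 ], [ 0, 1 ] ] );

              my $q       = $a / $b;    # same as $a * $b ** -1
              my $inverse = 1 / $a;     # same as $a ** -1 or $a->inverse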
1874
1875 '**' Exponentiation
1876
          Returns the matrix raised to an integer power. If 0 is passed,
          the identity matrix is returned. If a negative integer is passed,
          the inverse is computed (if it exists) and then raised to the
          absolute value of the integer. The matrix must be square.
1881
1882 Examples:
1883
1884 $matrix2 = $matrix ** 2;
1885
1886 $matrix **= 2;
1887
1888 $inv2 = $matrix ** -2;
1889
1890 $ident = $matrix ** 0;
1891
1892 '==' Equality
1893
1894 Tests two matrices for equality.
1895
1896 Example:
1897
1898 if ( $A * $x == $b ) { print "EUREKA!\n"; }
1899
          Note that in most cases, due to numerical errors (caused by the
          finite precision of computer arithmetic), it is a bad idea to
          compare two matrices or vectors this way.
1903
          Instead, compute the norm of the difference of the two matrices
          you want to compare and compare that norm with a small number,
          like this:
1906
1907 if ( abs( $A * $x - $b ) < 1E-12 ) { print "BINGO!\n"; }
1908
1909 '!=' Inequality
1910
1911 Tests two matrices for inequality.
1912
1913 Example:
1914
1915 while ($x0_vector != $xn_vector) { # proceed with iteration ... }
1916
1917 (Stops when the iteration becomes stationary)
1918
1919 Note that (just like with the '==' operator), it is usually a bad
1920 idea to compare matrices or vectors this way. Compare the norm of
1921 the difference of the two matrices with a small number instead.
1922
1923 '<' Less than
1924
1925 Examples:
1926
1927 if ( $matrix1 < $matrix2 ) { # ... }
1928
1929 if ( $vector < $epsilon ) { # ... }
1930
1931 if ( 1E-12 < $vector ) { # ... }
1932
1933 if ( $A * $x - $b < 1E-12 ) { # ... }
1934
1935 These are just shortcuts for saying:
1936
1937 if ( abs($matrix1) < abs($matrix2) ) { # ... }
1938
1939 if ( abs($vector) < abs($epsilon) ) { # ... }
1940
1941 if ( abs(1E-12) < abs($vector) ) { # ... }
1942
1943 if ( abs( $A * $x - $b ) < abs(1E-12) ) { # ... }
1944
1945 Uses the "one"-norm for matrices and Perl's built-in "abs()" for
1946 scalars.
1947
1948 '<=' Less than or equal
1949
1950 As with the '<' operator, this is just a shortcut for the same
1951 expression with "abs()" around all arguments.
1952
1953 Example:
1954
1955 if ( $A * $x - $b <= 1E-12 ) { # ... }
1956
1957 which in fact is the same as:
1958
1959 if ( abs( $A * $x - $b ) <= abs(1E-12) ) { # ... }
1960
1961 Uses the "one"-norm for matrices and Perl's built-in "abs()" for
1962 scalars.
1963
1964 '>' Greater than
1965
          As with the '<' and '<=' operators, this
1967
1968 if ( $xn - $x0 > 1E-12 ) { # ... }
1969
1970 is just a shortcut for:
1971
1972 if ( abs( $xn - $x0 ) > abs(1E-12) ) { # ... }
1973
1974 Uses the "one"-norm for matrices and Perl's built-in "abs()" for
1975 scalars.
1976
1977 '>=' Greater than or equal
1978
          As with the '<', '<=' and '>' operators, the following
1980
1981 if ( $LR >= $A ) { # ... }
1982
1983 is simply a shortcut for:
1984
1985 if ( abs($LR) >= abs($A) ) { # ... }
1986
1987 Uses the "one"-norm for matrices and Perl's built-in "abs()" for
1988 scalars.
1989
SEE ALSO
       Math::VectorReal, Math::PARI, Math::MatrixBool, Math::Vec, DFA::Kleene,
1992 Math::Kleene, Set::IntegerRange, Set::IntegerFast .
1993
VERSION
       This man page documents Math::MatrixReal version 2.05.
1996
1997 The latest released version can always be found at
1998 http://leto.net/code/Math-MatrixReal/ and there is also a subversion
1999 repository available at http://leto.net/svn/Math-MatrixReal/ .
2000
AUTHORS
       Steffen Beyer <sb@engelschall.com>, Rodolphe Ortalo <ortalo@laas.fr>,
2003 Jonathan Leto <jonathan@leto.net>.
2004
2005 Currently maintained by Jonathan Leto, send all bugs/patches to me.
2006
CREDITS
       Many thanks to Prof. Pahlings for stoking the fire of my enthusiasm for
2009 Algebra and Linear Algebra at the university (RWTH Aachen, Germany),
2010 and to Prof. Esser and his assistant, Mr. Jarausch, for their
2011 fascinating lectures in Numerical Analysis!
2012
COPYRIGHT
       Copyright (c) 1996-2008 by Steffen Beyer, Rodolphe Ortalo, Jonathan
2015 Leto. All rights reserved.
2016
LICENSE AGREEMENT
       This package is free software; you can redistribute it and/or modify it
2019 under the same terms as Perl itself.
2020
2021
2022
2023perl v5.12.0 2010-05-03 Math::MatrixReal(3)