MK-PARALLEL-DUMP(1)   User Contributed Perl Documentation  MK-PARALLEL-DUMP(1)

NAME

       mk-parallel-dump - Dump MySQL tables in parallel.

SYNOPSIS

       Dump all databases and tables to the current directory:

         mk-parallel-dump

       Dump all databases and tables via SELECT INTO OUTFILE to /tmp/dumps:

         mk-parallel-dump --tab --base-dir /tmp/dumps

       Dump only table db.foo in chunks of ten thousand rows using 8 threads:

         mk-parallel-dump --databases db --tables foo \
            --chunk-size 10000 --threads 8

       Dump tables in chunks of approximately 10kb of data (not ten thousand
       rows!):

         mk-parallel-dump --chunk-size 10k

RISKS

       The following section is included to inform users about the potential
       risks, whether known or unknown, of using this tool.  The two main
       categories of risks are those created by the nature of the tool (e.g.
       read-only tools vs. read-write tools) and those created by bugs.

       mk-parallel-dump is not a backup program!  It is designed only for fast
       data exports, for purposes such as quickly loading data into test
       systems.  Do not use mk-parallel-dump for backups.

       At the time of this release there is a bug that prevents
       "--lock-tables" from working correctly, and an unconfirmed bug that
       prevents the tool from finishing.

       The authoritative source for updated information is always the online
       issue tracking system.  Issues that affect this tool will be marked as
       such.  You can see a list of such issues at the following URL:
       <http://www.maatkit.org/bugs/mk-parallel-dump>.

       See also "BUGS" for more information on filing bugs and getting help.

DESCRIPTION

       mk-parallel-dump connects to a MySQL server, finds database and table
       names, and dumps them in parallel for speed.  Only tables and data are
       dumped; view definitions or any kind of stored code (triggers, events,
       routines, procedures, etc.) are not dumped.  However, if you dump the
       "mysql" database, you'll be dumping the stored routines anyway.

       Exit status is 0 if everything went well, 1 if any chunks failed, and
       any other value indicates an internal error.
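
       For example, a wrapper script might branch on the exit status like
       this (a sketch; the base directory shown is arbitrary):

         mk-parallel-dump --base-dir /tmp/dumps
         case $? in
            0) echo "dump completed OK";;
            1) echo "some chunks failed";;
            *) echo "internal error";;
         esac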

       To dump all tables to uncompressed text files in the current directory,
       each database with its own directory, with a global read lock, flushing
       and recording binary log positions, each table in a single file:

         mk-parallel-dump

       To dump tables elsewhere:

         mk-parallel-dump --base-dir /path/to/elsewhere

       To dump to tab-separated files with "SELECT INTO OUTFILE", each table
       with separate data and SQL files:

         mk-parallel-dump --tab

       mk-parallel-dump doesn't clean out any destination directories before
       dumping into them.  You can move the old destination out of the way,
       then remove it after a successful dump, with a shell script like the
       following:

          #!/bin/sh
          CNT=`ls | grep -c old`
          if [ -d default ]; then mv default default.old.$CNT; fi
          mk-parallel-dump
          if [ $? != 0 ]
          then
             echo "There were errors, not purging old sets."
          else
             echo "No errors during dump, purging old sets."
             rm -rf default.old.*
          fi

       mk-parallel-dump checks whether files have been created before dumping.
       If a file has already been created, it skips the table or chunk that
       would have created it.  This makes it possible to resume dumps.  If you
       don't want this behavior and want a full dump instead, move the old
       files out of the way or disable "--[no]resume".
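
       For example, to force a full dump into a base directory left over from
       a previous run (a sketch; the path is arbitrary, and "--noresume" is
       the usual negated form of "--[no]resume"):

         mk-parallel-dump --noresume --base-dir /tmp/dumps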

CHUNKS

       mk-parallel-dump can break your tables into chunks when dumping, and
       put approximately the amount of data you specify into each chunk.  This
       is useful for two reasons:

       ·   A table that is dumped in chunks can be dumped in many threads
           simultaneously.

       ·   Dumping in chunks creates small files, which can be imported more
           efficiently and safely.  Importing a single huge file can be a lot
           of extra work for transactional storage engines like InnoDB.  A
           huge file can create a huge rollback segment in your tablespace.
           If the import fails, the rollback can take a very long time.

       To dump in chunks, specify the "--chunk-size" option.  This option is
       an integer with an optional suffix.  Without the suffix, it's the
       number of rows you want in each chunk.  With the suffix, it's the
       approximate size of the data.

       mk-parallel-dump tries to use index statistics to calculate where the
       boundaries between chunks should be.  If the values are not evenly
       distributed, some chunks can have a lot of rows, and others may have
       very few or even none.  Some chunks can exceed the size you want.

       When you specify the size with a suffix, the allowed suffixes are k, M
       and G, for kibibytes, mebibytes, and gibibytes, respectively.  mk-
       parallel-dump doesn't know anything about data size.  It asks MySQL
       (via "SHOW TABLE STATUS") how long an average row is in the table, and
       converts your option to a number of rows.
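
       For example (the numbers are illustrative), if "SHOW TABLE STATUS"
       reports an average row length of 100 bytes and you specify
       "--chunk-size 10k", the target chunk is 10 * 1024 = 10240 bytes, so
       the tool aims for roughly 10240 / 100 = 102 rows per chunk.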

       Not all tables can be broken into chunks.  mk-parallel-dump looks for
       an index whose leading column is numeric (integers, real numbers, and
       date and time types).  It prefers the primary key if its first column
       is chunk-able.  Otherwise it chooses the first chunk-able column in the
       table.
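
       For illustration (the table, column, and boundary values here are
       hypothetical), chunking on an integer primary key "id" might produce a
       series of statements like:

         SELECT * FROM db.tbl WHERE id <  10000;
         SELECT * FROM db.tbl WHERE id >= 10000 AND id < 20000;
         SELECT * FROM db.tbl WHERE id >= 20000;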

       Generating such a series of "WHERE" clauses to divide a table into
       evenly-sized chunks is difficult.  If you have any ideas on how to
       improve the algorithm, please write to the author (see "BUGS").

OUTPUT

       Output depends on "--verbose", "--progress", "--dry-run" and "--quiet".
       If "--dry-run" is specified, mk-parallel-dump prints the commands or
       SQL statements that it would use to dump data, but it does not actually
       dump any data.  If "--quiet" is specified there is no output; this
       overrides all other options that affect the output.

       The default output is something like the following example:

         CHUNK  TIME  EXIT  SKIPPED DATABASE.TABLE
            db  0.28     0        0 sakila
           all  0.28     0        0 -

       CHUNK
           The CHUNK column signifies what kind of information is in the line:

             Value  Meaning
             =====  ========================================================
             db     This line contains summary information about a database.
             tbl    This line contains summary information about a table.
             <int>  This line contains information about the Nth chunk of a
                    table.

           The types of lines you'll see depend on the "--chunk-size" and
           "--verbose" options.  mk-parallel-dump treats everything as a
           chunk.  If you don't specify "--chunk-size", then each table is one
           big chunk and each database is a chunk (of all its tables).  Thus,
           there is output for numbered table chunks ("--chunk-size"), table
           chunks, and database chunks.

       TIME
           The TIME column shows the wallclock time elapsed while the chunk
           was dumped.  If CHUNK is "db" or "tbl", this time is the total
           wallclock time elapsed for the database or table.

       EXIT
           The EXIT column shows the exit status of the chunk.  Any non-zero
           exit signifies an error.  The causes of errors are usually printed
           to STDERR.

       SKIPPED
           The SKIPPED column shows how many chunks were skipped.  These are
           not errors.  Chunks are skipped if the dump can be resumed.  See
           "--[no]resume".

       DATABASE.TABLE
           The DATABASE.TABLE column shows to which table the chunk belongs.
           For "db" chunks, this shows just the database.  Chunks are printed
           when they complete, and this is often out of the order you'd
           expect.  For example, you might see a chunk for db1.table_1, then a
           chunk for db2.table_2, then another chunk for db1.table_1, then the
           "db" chunk summary for db2.

       PROGRESS
           If you specify "--progress", then the tool adds a PROGRESS column
           after DATABASE.TABLE, which contains text similar to the following:

             PROGRESS
             4.10M/4.10M 100.00% ETA ... 00:00 (2009-10-16T15:37:49)
             done at 2009-10-16T15:37:48, 1 databases, 16 tables, 16 chunks

           This column shows information about the amount of data dumped so
           far, the amount of data left to dump, and an ETA ("estimated time
           of arrival").  The ETA is a best-effort prediction of when
           everything will be finished dumping.  Sometimes the ETA is very
           accurate, but at other times it can be significantly wrong.

       The final line of the output is special: it summarizes all chunks (all
       table chunks, tables and databases).

       If you specify "--verbose" once, then the output includes "tbl" CHUNKS:

         CHUNK  TIME  EXIT  SKIPPED DATABASE.TABLE
           tbl  0.07     0        0 sakila.payment
           tbl  0.08     0        0 sakila.rental
           tbl  0.03     0        0 sakila.film
            db  0.28     0        0 sakila
           all  0.28     0        0 -

       And if you specify "--verbose" twice in conjunction with
       "--chunk-size", then the output includes the chunks:

         CHUNK  TIME  EXIT  SKIPPED DATABASE.TABLE
             0  0.03     0        0 sakila.payment
             1  0.03     0        0 sakila.payment
           tbl  0.10     0        0 sakila.payment
             0  0.01     0        1 sakila.store
           tbl  0.02     0        1 sakila.store
            db  0.20     0        1 sakila
           all  0.21     0        1 -

       The output shows that "sakila.payment" was dumped in two chunks, and
       "sakila.store" was dumped in one chunk that was skipped.

SPEED OF PARALLEL DUMPS

       How much faster is it to dump in parallel?  That depends on your
       hardware and data.  You may be able to dump files twice as fast, or
       more if you have lots of disks and CPUs.  At the time of writing, no
       benchmarks exist for the current release.  User-contributed results for
       older versions of mk-parallel-dump showed very good speedup depending
       on the hardware.  Here are two links you can use as reference:

       ·   <http://www.paragon-cs.com/wordpress/?p=52>

       ·   <http://mituzas.lt/2009/02/03/mydumper/>

OPTIONS

244       "--lock-tables" and "--[no]flush-lock" are mutually exclusive.
245
246       --ask-pass
247           Prompt for a password when connecting to MySQL.
248
249       --base-dir
250           type: string
251
252           The base directory in which files will be stored.
253
254           The default is the current working directory.  Each database gets
255           its own directory under the base directory.  So if the base
256           directory is "/tmp" and database "foo" is dumped, then the
257           directory "/tmp/foo" is created which contains all the table dump
258           files for "foo".
259
260       --[no]biggest-first
261           default: yes
262
263           Process tables in descending order of size (biggest to smallest).
264
265           This strategy gives better parallelization.  Suppose there are 8
266           threads and the last table is huge.  We will finish everything else
267           and then be running single-threaded while that one finishes.  If
268           that one runs first, then we will have the max number of threads
269           running at a time for as long as possible.

       --[no]bin-log-position
           default: yes

           Dump the master/slave position.

           Dump binary log positions from both "SHOW MASTER STATUS" and "SHOW
           SLAVE STATUS", whichever can be retrieved from the server.  The
           data is dumped to a file named 00_master_data.sql in the
           "--base-dir".

           The file also contains details of each table dumped, including the
           WHERE clauses used to dump it in chunks.

       --charset
           short form: -A; type: string

           Default character set.  If the value is utf8, sets Perl's binmode
           on STDOUT to utf8, passes the mysql_enable_utf8 option to
           DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL.  Any
           other value sets binmode on STDOUT without the utf8 layer, and runs
           SET NAMES after connecting to MySQL.

       --chunk-size
           type: string

           Number of rows or data size to dump per file.

           Specifies that the table should be dumped in segments of
           approximately the size given.  The syntax is either a plain
           integer, which is interpreted as a number of rows per chunk, or an
           integer with a suffix of G, M, or k, which is interpreted as the
           size of the data to be dumped in each chunk.  See "CHUNKS" for more
           details.

       --client-side-buffering
           Fetch and buffer results in memory on the client.

           By default this option is not enabled, because it causes data to be
           completely fetched from the server and then buffered in memory on
           the client.  For large dumps this can require a lot of memory.

           Instead, the default (when this option is not specified) is to
           fetch and dump rows one-by-one from the server.  This requires a
           lot less memory on the client, but can keep the tables on the
           server locked longer.

           Use this option only if you're sure that the data being dumped is
           relatively small and the client has sufficient memory.  Remember
           that, if this option is specified, all "--threads" buffer their
           results in memory, so memory consumption can grow in proportion to
           the number of "--threads".

       --config
           type: Array

           Read this comma-separated list of config files; if specified, this
           must be the first option on the command line.

       --csv
           Do "--tab" dump in CSV format (implies "--tab").

           Changes "--tab" options so the dump file is in comma-separated
           values (CSV) format.  The SELECT INTO OUTFILE statement looks like
           the following, and can be re-loaded with the same options:

              SELECT * INTO OUTFILE %D.%N.%6C.txt
              FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
              LINES TERMINATED BY '\n' FROM %D.%N;

       --databases
           short form: -d; type: hash

           Dump only this comma-separated list of databases.

       --databases-regex
           type: string

           Dump only databases whose names match this Perl regex.

       --defaults-file
           short form: -F; type: string

           Only read mysql options from the given file.  You must give an
           absolute pathname.

       --dry-run
           Print commands instead of executing them.

       --engines
           short form: -e; type: hash

           Dump only tables that use this comma-separated list of storage
           engines.

       --[no]flush-lock
           Use "FLUSH TABLES WITH READ LOCK".

           This is enabled by default.  The lock is taken once, at the
           beginning of the whole process and is released after all tables
           have been dumped.  If you want to lock only the tables you're
           dumping, use "--lock-tables".

       --flush-log
           Execute "FLUSH LOGS" when getting binlog positions.

           This option is NOT enabled by default because it causes the MySQL
           server to rotate its error log, potentially overwriting error
           messages.

       --[no]gzip
           default: yes

           Compress (gzip) SQL dump files; does not work with "--tab".

           The IO::Compress::Gzip Perl module is used to compress SQL dump
           files as they are written to disk.  The resulting dump files have a
           ".gz" extension, like "table.000000.sql.gz".  They can be
           uncompressed with gzip.  mk-parallel-restore will automatically
           uncompress them, too, when restoring.

           This option does not work with "--tab" because the MySQL server
           writes the tab dump files directly using "SELECT INTO OUTFILE".

       --help
           Show help and exit.

       --host
           short form: -h; type: string

           Connect to host.

       --ignore-databases
           type: Hash

           Ignore this comma-separated list of databases.

       --ignore-databases-regex
           type: string

           Ignore databases whose names match this Perl regex.

       --ignore-engines
           type: Hash; default: FEDERATED,MRG_MyISAM

           Do not dump tables that use this comma-separated list of storage
           engines.

           The schema file will be dumped as usual.  This prevents dumping
           data for Federated tables and Merge tables.

       --ignore-tables
           type: Hash

           Ignore this comma-separated list of table names.

           Table names may be qualified with the database name.

       --ignore-tables-regex
           type: string

           Ignore tables whose names match this Perl regex.

       --lock-tables
           Use "LOCK TABLES" (disables "--[no]flush-lock").

           Disables "--[no]flush-lock" (unless it was explicitly set) and
           locks tables with "LOCK TABLES READ".  The lock is taken and
           released for every table as it is dumped.

       --lossless-floats
           Dump float types with extra precision for lossless restore
           (requires "--tab").

           Wraps these types with a call to "FORMAT()" with 17 digits of
           precision.  According to the comments in Google's patches, this
           will give lossless dumping and reloading in most cases.  (I
           shamelessly stole this technique from them.  I don't know enough
           about floating-point math to have an opinion.)

           This works only with "--tab".

       --password
           short form: -p; type: string

           Password to use when connecting.

       --pid
           type: string

           Create the given PID file.  The file contains the process ID of the
           script.  The PID file is removed when the script exits.  Before
           starting, the script checks whether the PID file already exists.
           If it does not, the script creates the file and writes its own PID
           to it.  If it does, the script checks the file's contents: if it
           contains a PID and a process is running with that PID, the script
           dies; if no process is running with that PID, the script overwrites
           the file with its own PID and starts; if the file contains no PID
           at all, the script dies.
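
           For example (the path shown is arbitrary):

             mk-parallel-dump --pid /tmp/mk-parallel-dump.pid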

       --port
           short form: -P; type: int

           Port number to use for connection.

       --progress
           Display progress reports.

           Progress is displayed each time a table or chunk of a table
           finishes dumping.  Progress is calculated by measuring the average
           data size of each full chunk and assuming all bytes are created
           equal.  The output is the completed and total bytes, the percent
           completed, estimated time remaining, and estimated completion time.
           For example:

             40.72k/112.00k  36.36% ETA 00:00 (2009-10-27T19:17:53)

           If "--chunk-size" is not specified then each table is effectively
           one big chunk and the progress reports are pretty accurate.  When
           "--chunk-size" is specified the progress reports can be skewed
           because of averaging.

           Progress reports are inaccurate when a dump is resumed.  This is a
           known issue and will be fixed in a later release.

       --quiet
           short form: -q

           Quiet output; disables "--verbose".

       --[no]resume
           default: yes

           Resume dumps.

       --set-vars
           type: string; default: wait_timeout=10000

           Set these MySQL variables.  Immediately after connecting to MySQL,
           this string will be appended to SET and executed.
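
           For example (a sketch; both are standard MySQL system variables):

             mk-parallel-dump --set-vars "wait_timeout=10000, net_write_timeout=600"

           causes the tool to execute "SET wait_timeout=10000,
           net_write_timeout=600" immediately after connecting.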

       --socket
           short form: -S; type: string

           Socket file to use for connection.

       --stop-slave
           Issue "STOP SLAVE" on server before dumping data.

           This ensures that the data is not changing during the dump.  Issues
           "START SLAVE" after the dump is complete.

           If the slave is not running, throws an error and exits.  This is to
           prevent possibly bad things from happening if the slave is not
           running because of a problem, or because someone intentionally
           stopped the slave for maintenance or some other purpose.

       --tab
           Dump tab-separated (sets "--umask" 0).

           Dump via "SELECT INTO OUTFILE", which is similar to what
           "mysqldump" does with the "--tab" option, but you're not
           constrained to a single database at a time.

           Before you use this option, make sure you know what "SELECT INTO
           OUTFILE" does!  I recommend using it only if you're running mk-
           parallel-dump on the same machine as the MySQL server, but there is
           no protection if you don't.

           This option sets "--umask" to zero so auto-created directories are
           writable by the MySQL server.

       --tables
           short form: -t; type: hash

           Dump only this comma-separated list of table names.

           Table names may be qualified with the database name.

       --tables-regex
           type: string

           Dump only tables whose names match this Perl regex.

       --threads
           type: int; default: 2

           Number of threads to dump concurrently.

           Specifies the number of parallel processes to run.  The base
           default is 2 (this is mk-parallel-dump, after all -- 1 is not
           parallel).  On GNU/Linux machines, the default is the number of
           times 'processor' appears in /proc/cpuinfo.  On Windows, the
           default is read from the environment.  In any case, the default is
           at least 2, even when there's only a single processor.

       --[no]tz-utc
           default: yes

           Enable TIMESTAMP columns to be dumped and reloaded between
           different time zones.  mk-parallel-dump sets its connection time
           zone to UTC and adds "SET TIME_ZONE='+00:00'" to the dump file.
           Without this option, TIMESTAMP columns are dumped and reloaded in
           the time zones local to the source and destination servers, which
           can cause the values to change.  This option also protects against
           changes due to daylight saving time.

           This option is identical to "mysqldump --tz-utc".  In fact, the
           above text was copied from mysqldump's man page.

       --umask
           type: string

           Set the program's "umask" to this octal value.

           This is useful when you want created files and directories to be
           readable or writable by other users (for example, the MySQL server
           itself).

       --user
           short form: -u; type: string

           User for login if not current user.

       --verbose
           short form: -v; cumulative: yes

           Be verbose; can specify multiple times.

           See "OUTPUT".

       --version
           Show version and exit.

       --wait
           short form: -w; type: time; default: 5m

           Wait limit when the server is down.

           If the MySQL server crashes during dumping, mk-parallel-dump waits
           until the server comes back and then continues with the rest of the
           tables.  It will check the server every second until this time is
           exhausted, at which point it will give up and exit.

           This implements Peter Zaitsev's "safe dump" request: sometimes a
           dump on a server that has corrupt data will kill the server.  mk-
           parallel-dump will wait for the server to restart, then keep going.
           It's hard to say which table killed the server, so no tables will
           be retried.  Tables that were being concurrently dumped when the
           crash happened will not be retried.  No additional locks will be
           taken after the server restarts; it's assumed this behavior is
           useful only on a server you're not trying to dump while it's in
           production.

       --[no]zero-chunk
           default: yes

           Add a chunk for rows with zero or zero-equivalent values.  This
           option only has an effect when "--chunk-size" is specified.  The
           purpose of the zero chunk is to capture a potentially large number
           of zero values that would imbalance the size of the first chunk.
           For example, if a lot of negative numbers were inserted into an
           unsigned integer column, causing them to be stored as zeros, then
           these zero values are captured by the zero chunk instead of the
           first chunk and all its non-zero values.
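
           For illustration (the table and column names, and the boundary
           value, are hypothetical), with the zero chunk enabled the first
           chunks of a table chunked on column "col" might be dumped as:

             SELECT * FROM db.tbl WHERE col = 0;
             SELECT * FROM db.tbl WHERE col > 0 AND col < 10000;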

DSN OPTIONS

       These DSN options are used to create a DSN.  Each option is given like
       "option=value".  The options are case-sensitive, so P and p are not the
       same option.  There cannot be whitespace before or after the "=" and if
       the value contains whitespace it must be quoted.  DSN options are
       comma-separated.  See the maatkit manpage for full details.
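
       For example (a sketch, assuming the tool accepts a DSN argument on the
       command line as most Maatkit tools do):

         mk-parallel-dump h=localhost,P=3306,u=dumper --ask-pass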

       ·   A

           dsn: charset; copy: yes

           Default character set.

       ·   D

           dsn: database; copy: yes

           Default database.

       ·   F

           dsn: mysql_read_default_file; copy: yes

           Only read default options from the given file.

       ·   h

           dsn: host; copy: yes

           Connect to host.

       ·   p

           dsn: password; copy: yes

           Password to use when connecting.

       ·   P

           dsn: port; copy: yes

           Port number to use for connection.

       ·   S

           dsn: mysql_socket; copy: yes

           Socket file to use for connection.

       ·   u

           dsn: user; copy: yes

           User for login if not current user.

DOWNLOADING

       You can download Maatkit from Google Code at
       <http://code.google.com/p/maatkit/>, or you can get any of the tools
       easily with a command like the following:

          wget http://www.maatkit.org/get/toolname
          or
          wget http://www.maatkit.org/trunk/toolname

       Where "toolname" can be replaced with the name (or fragment of a name)
       of any of the Maatkit tools.  Once downloaded, they're ready to run; no
       installation is needed.  The first URL gets the latest released version
       of the tool, and the second gets the latest trunk code from Subversion.

ENVIRONMENT

       The environment variable "MKDEBUG" enables verbose debugging output in
       all of the Maatkit tools:

          MKDEBUG=1 mk-....

SYSTEM REQUIREMENTS

       You need Perl, DBI, DBD::mysql, and some core packages that ought to be
       installed in any reasonably new version of Perl.

       This program works best on GNU/Linux.  Filename quoting might not work
       well on Microsoft Windows if you have spaces or funny characters in
       your database or table names.

BUGS

       For a list of known bugs see
       <http://www.maatkit.org/bugs/mk-parallel-dump>.

       Please use Google Code Issues and Groups to report bugs or request
       support: <http://code.google.com/p/maatkit/>.  You can also join
       #maatkit on Freenode to discuss Maatkit.

       Please include the complete command-line used to reproduce the problem
       you are seeing, the version of all MySQL servers involved, the complete
       output of the tool when run with "--version", and if possible,
       debugging output produced by running with the "MKDEBUG=1" environment
       variable.

COPYRIGHT, LICENSE AND WARRANTY

       This program is copyright 2007-2010 Baron Schwartz.  Feedback and
       improvements are welcome.

       THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED
       WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
       MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

       This program is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation, version 2; OR the Perl Artistic License.  On
       UNIX and similar systems, you can issue `man perlgpl' or `man
       perlartistic' to read these licenses.

       You should have received a copy of the GNU General Public License along
       with this program; if not, write to the Free Software Foundation, Inc.,
       59 Temple Place, Suite 330, Boston, MA  02111-1307  USA.

SEE ALSO

       See also mk-parallel-restore.

AUTHOR

       Baron Schwartz

ABOUT MAATKIT

       This tool is part of Maatkit, a toolkit for power users of MySQL.
       Maatkit was created by Baron Schwartz; Baron and Daniel Nichter are the
       primary code contributors.  Both are employed by Percona.  Financial
       support for Maatkit development is primarily provided by Percona and
       its clients.

VERSION

       This manual page documents Ver 1.0.26 Distrib 6839 $Revision: 6831 $.

perl v5.12.1                      2010-08-01               MK-PARALLEL-DUMP(1)