MK-PARALLEL-DUMP(1)   User Contributed Perl Documentation  MK-PARALLEL-DUMP(1)

NAME

       mk-parallel-dump - (DEPRECATED) Dump MySQL tables in parallel.

SYNOPSIS

       This tool is deprecated because, after several complete redesigns, we
       concluded that Perl is the wrong technology for this task.  Please
       read "RISKS" before you use it.  It remains useful for some people
       whom we know aren't depending on it in production, and therefore we
       are not removing it from the distribution.

       Usage: mk-parallel-dump [OPTION...] [DSN]

       mk-parallel-dump dumps MySQL tables in parallel to make some data
       loading operations more convenient.  IT IS NOT A BACKUP TOOL!

       Dump all databases and tables to the current directory:

         mk-parallel-dump

       Dump all databases and tables via SELECT INTO OUTFILE to /tmp/dumps:

         mk-parallel-dump --tab --base-dir /tmp/dumps

       Dump only table db.foo in chunks of ten thousand rows using 8 threads:

         mk-parallel-dump --databases db --tables foo \
            --chunk-size 10000 --threads 8

       Dump tables in chunks of approximately 10kb of data (not ten thousand
       rows!):

         mk-parallel-dump --chunk-size 10k


RISKS

       The following section is included to inform users about the potential
       risks, whether known or unknown, of using this tool.  The two main
       categories of risks are those created by the nature of the tool (e.g.
       read-only tools vs. read-write tools) and those created by bugs.

       mk-parallel-dump is not a backup program!  It is designed only for
       fast data exports, for purposes such as quickly loading data into test
       systems.  Do not use mk-parallel-dump for backups.

       At the time of this release there is a bug that prevents
       "--lock-tables" from working correctly, an unconfirmed bug that
       prevents the tool from finishing, a bug that causes the wrong character
       set to be used, and a bug in replacing default values.

       The authoritative source for updated information is always the online
       issue tracking system.  Issues that affect this tool will be marked as
       such.  You can see a list of such issues at the following URL:
       <http://www.maatkit.org/bugs/mk-parallel-dump>.

       See also "BUGS" for more information on filing bugs and getting help.


DESCRIPTION

       mk-parallel-dump connects to a MySQL server, finds database and table
       names, and dumps them in parallel for speed.  Only tables and data are
       dumped; view definitions or any kind of stored code (triggers, events,
       routines, procedures, etc.) are not dumped.  However, if you dump the
       "mysql" database, you'll be dumping the stored routines anyway.

       Exit status is 0 if everything went well, 1 if any chunks failed, and
       any other value indicates an internal error.

       To dump all tables to uncompressed text files in the current directory,
       each database with its own directory, with a global read lock, flushing
       and recording binary log positions, each table in a single file:

         mk-parallel-dump

       To dump tables elsewhere:

         mk-parallel-dump --base-dir /path/to/elsewhere

       To dump to tab-separated files with "SELECT INTO OUTFILE", each table
       with separate data and SQL files:

         mk-parallel-dump --tab

       mk-parallel-dump doesn't clean out any destination directories before
       dumping into them.  You can move away the old destination, then remove
       it after a successful dump, with a shell script like the following:

          #!/bin/sh
          CNT=`ls | grep -c old`
          if [ -d default ]; then mv default default.old.$CNT; fi
          mk-parallel-dump
          if [ $? != 0 ]
          then
             echo "There were errors, not purging old sets."
          else
             echo "No errors during dump, purging old sets."
             rm -rf default.old.*
          fi

       mk-parallel-dump checks whether files have been created before dumping.
       If a file has already been created, it skips the table or chunk that
       would have created the file.  This makes it possible to resume dumps.
       If you don't want this behavior and instead want a full dump, then
       move away the old files or specify "--no-resume" (see "--[no]resume").

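       For example, to redo a dump from scratch rather than resume it,
       disable resuming (the base directory here is illustrative):

         mk-parallel-dump --no-resume --base-dir /tmp/dumps
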

CHUNKS

       mk-parallel-dump can break your tables into chunks when dumping, and
       put approximately the amount of data you specify into each chunk.  This
       is useful for two reasons:

       •   A table that is dumped in chunks can be dumped in many threads
           simultaneously.

       •   Dumping in chunks creates small files, which can be imported more
           efficiently and safely.  Importing a single huge file can be a lot
           of extra work for transactional storage engines like InnoDB.  A
           huge file can create a huge rollback segment in your tablespace.
           If the import fails, the rollback can take a very long time.

       To dump in chunks, specify the "--chunk-size" option.  This option is
       an integer with an optional suffix.  Without the suffix, it's the
       number of rows you want in each chunk.  With the suffix, it's the
       approximate size of the data.

       mk-parallel-dump tries to use index statistics to calculate where the
       boundaries between chunks should be.  If the values are not evenly
       distributed, some chunks can have a lot of rows, and others may have
       very few or even none.  Some chunks can exceed the size you want.

       When you specify the size with a suffix, the allowed suffixes are k, M
       and G, for kibibytes, mebibytes, and gibibytes, respectively.
       mk-parallel-dump doesn't know anything about data size.  It asks MySQL
       (via "SHOW TABLE STATUS") how long an average row is in the table, and
       converts your option to a number of rows.

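       As a rough sketch of that conversion (the 100-byte average row length
       here is hypothetical): if "SHOW TABLE STATUS" reports Avg_row_length =
       100 for db.foo, then "--chunk-size 10k" (10 * 1024 = 10240 bytes)
       becomes approximately 10240 / 100 = 102 rows per chunk.  You can
       inspect the average row length yourself:

         mysql -e "SHOW TABLE STATUS LIKE 'foo'\G" db | grep Avg_row_length
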
       Not all tables can be broken into chunks.  mk-parallel-dump looks for
       an index whose leading column is numeric (integers, real numbers, and
       date and time types).  It prefers the primary key if its first column
       is chunk-able.  Otherwise it chooses the first chunk-able column in the
       table.

       Generating a series of "WHERE" clauses to divide a table into evenly-
       sized chunks is difficult.  If you have any ideas on how to improve the
       algorithm, please write to the author (see "BUGS").


OUTPUT

       Output depends on "--verbose", "--progress", "--dry-run" and "--quiet".
       If "--dry-run" is specified, mk-parallel-dump prints the commands or
       SQL statements that it would use to dump data, but it does not actually
       dump any data.  If "--quiet" is specified, there is no output; this
       overrides all other options that affect the output.

       The default output is something like the following example:

         CHUNK  TIME  EXIT  SKIPPED DATABASE.TABLE
            db  0.28     0        0 sakila
           all  0.28     0        0 -

       CHUNK
           The CHUNK column signifies what kind of information is in the line:

             Value  Meaning
             =====  ========================================================
             db     This line contains summary information about a database.
             tbl    This line contains summary information about a table.
             <int>  This line contains information about the Nth chunk of a
                    table.

           The types of lines you'll see depend on the "--chunk-size" and
           "--verbose" options.  mk-parallel-dump treats everything as a
           chunk.  If you don't specify "--chunk-size", then each table is one
           big chunk and each database is a chunk (of all its tables).  Thus,
           there is output for numbered table chunks (with "--chunk-size"),
           table chunks, and database chunks.

       TIME
           The TIME column shows the wallclock time elapsed while the chunk
           was dumped.  If CHUNK is "db" or "tbl", this time is the total
           wallclock time elapsed for the database or table.

       EXIT
           The EXIT column shows the exit status of the chunk.  Any non-zero
           exit signifies an error.  The causes of errors are usually printed
           to STDERR.

       SKIPPED
           The SKIPPED column shows how many chunks were skipped.  These are
           not errors.  Chunks are skipped if the dump can be resumed.  See
           "--[no]resume".

       DATABASE.TABLE
           The DATABASE.TABLE column shows to which table the chunk belongs.
           For "db" chunks, this shows just the database.  Chunks are printed
           when they complete, often out of the order you'd expect.  For
           example, you might see a chunk for db1.table_1, then a chunk for
           db2.table_2, then another chunk for db1.table_1, then the "db"
           chunk summary for db2.

       PROGRESS
           If you specify "--progress", then the tool adds a PROGRESS column
           after DATABASE.TABLE, which contains text similar to the following:

             PROGRESS
             4.10M/4.10M 100.00% ETA ... 00:00 (2009-10-16T15:37:49)
             done at 2009-10-16T15:37:48, 1 databases, 16 tables, 16 chunks

           This column shows information about the amount of data dumped so
           far, the amount of data left to dump, and an ETA ("estimated time
           of arrival").  The ETA is a best-effort prediction of when
           everything will be finished dumping.  Sometimes the ETA is very
           accurate, but at other times it can be significantly wrong.

       The final line of the output is special: it summarizes all chunks (all
       table chunks, tables and databases).

       If you specify "--verbose" once, then the output includes "tbl" chunks:

         CHUNK  TIME  EXIT  SKIPPED DATABASE.TABLE
           tbl  0.07     0        0 sakila.payment
           tbl  0.08     0        0 sakila.rental
           tbl  0.03     0        0 sakila.film
            db  0.28     0        0 sakila
           all  0.28     0        0 -

       And if you specify "--verbose" twice in conjunction with
       "--chunk-size", then the output includes the numbered chunks:

         CHUNK  TIME  EXIT  SKIPPED DATABASE.TABLE
             0  0.03     0        0 sakila.payment
             1  0.03     0        0 sakila.payment
           tbl  0.10     0        0 sakila.payment
             0  0.01     0        1 sakila.store
           tbl  0.02     0        1 sakila.store
            db  0.20     0        1 sakila
           all  0.21     0        1 -

       The output shows that "sakila.payment" was dumped in two chunks, and
       "sakila.store" was dumped in one chunk that was skipped.

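       Because the columns are whitespace-separated, the report is easy to
       post-process.  For example, a minimal sketch that flags any chunk
       whose EXIT status is non-zero (assuming the default column layout
       shown above):

         mk-parallel-dump --chunk-size 10000 \
           | awk 'NR > 1 && $3 != 0 { print "failed:", $0 }'
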

SPEED OF PARALLEL DUMPS

       How much faster is it to dump in parallel?  That depends on your
       hardware and data.  You may be able to dump files twice as fast, or
       even faster if you have lots of disks and CPUs.  At the time of
       writing, no benchmarks exist for the current release.  User-contributed
       results for older versions of mk-parallel-dump showed very good speedup
       depending on the hardware.  Here are two links you can use as
       reference:

       •   <http://www.paragon-cs.com/wordpress/?p=52>

       •   <http://mituzas.lt/2009/02/03/mydumper/>


OPTIONS

254       "--lock-tables" and "--[no]flush-lock" are mutually exclusive.
255
256       This tool accepts additional command-line arguments.  Refer to the
257       "SYNOPSIS" and usage information for details.
258
259       --ask-pass
260           Prompt for a password when connecting to MySQL.
261
       --base-dir
           type: string

           The base directory in which files will be stored.

           The default is the current working directory.  Each database gets
           its own directory under the base directory.  So if the base
           directory is "/tmp" and database "foo" is dumped, then the
           directory "/tmp/foo" is created which contains all the table dump
           files for "foo".

       --[no]biggest-first
           default: yes

           Process tables in descending order of size (biggest to smallest).

           This strategy gives better parallelization.  Suppose there are 8
           threads and the last table is huge.  Everything else will finish,
           and then a single thread will be left running while that one table
           is dumped.  If the huge table is dumped first instead, the maximum
           number of threads stays busy for as long as possible.

       --[no]bin-log-position
           default: yes

           Dump the master/slave position.

           Dump binary log positions from both "SHOW MASTER STATUS" and "SHOW
           SLAVE STATUS", whichever can be retrieved from the server.  The
           data is dumped to a file named 00_master_data.sql in the
           "--base-dir".

           The file also contains details of each table dumped, including the
           WHERE clauses used to dump it in chunks.

       --charset
           short form: -A; type: string

           Default character set.  If the value is utf8, sets Perl's binmode
           on STDOUT to utf8, passes the mysql_enable_utf8 option to
           DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL.  Any
           other value sets binmode on STDOUT without the utf8 layer, and runs
           SET NAMES after connecting to MySQL.

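           For example, to dump utf8 data with consistent handling on both
           the client and the connection (the base directory is
           illustrative):

             mk-parallel-dump --charset utf8 --base-dir /tmp/dumps
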
       --chunk-size
           type: string

           Number of rows or data size to dump per file.

           Specifies that the table should be dumped in segments of
           approximately the size given.  The syntax is either a plain
           integer, which is interpreted as a number of rows per chunk, or an
           integer with a suffix of G, M, or k, which is interpreted as the
           size of the data to be dumped in each chunk.  See "CHUNKS" for more
           details.

       --client-side-buffering
           Fetch and buffer results in memory on the client.

           By default this option is not enabled, because it causes data to
           be completely fetched from the server and then buffered in-memory
           on the client.  For large dumps this can require a lot of memory.

           Instead, the default (when this option is not specified) is to
           fetch and dump rows one-by-one from the server.  This requires a
           lot less memory on the client, but can keep the tables on the
           server locked longer.

           Use this option only if you're sure that the data being dumped is
           relatively small and the client has sufficient memory.  Remember
           that, if this option is specified, all "--threads" buffer their
           results in-memory, so memory consumption can increase by a factor
           of the number of "--threads".

       --config
           type: Array

           Read this comma-separated list of config files; if specified, this
           must be the first option on the command line.

       --csv
           Do a "--tab" dump in CSV format (implies "--tab").

           Changes "--tab" options so the dump file is in comma-separated
           values (CSV) format.  The SELECT INTO OUTFILE statement looks like
           the following, and can be re-loaded with the same options:

              SELECT * INTO OUTFILE %D.%N.%6C.txt
              FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
              LINES TERMINATED BY '\n' FROM %D.%N;

       --databases
           short form: -d; type: Hash

           Dump only this comma-separated list of databases.

       --databases-regex
           type: string

           Dump only databases whose names match this Perl regex.

       --defaults-file
           short form: -F; type: string

           Only read mysql options from the given file.  You must give an
           absolute pathname.

       --dry-run
           Print commands instead of executing them.

       --engines
           short form: -e; type: Hash

           Dump only tables that use this comma-separated list of storage
           engines.

       --[no]flush-lock
           Use "FLUSH TABLES WITH READ LOCK".

           This is enabled by default.  The lock is taken once, at the
           beginning of the whole process and is released after all tables
           have been dumped.  If you want to lock only the tables you're
           dumping, use "--lock-tables".

       --flush-log
           Execute "FLUSH LOGS" when getting binlog positions.

           This option is NOT enabled by default because it causes the MySQL
           server to rotate its error log, potentially overwriting error
           messages.

       --[no]gzip
           default: yes

           Compress (gzip) SQL dump files; does not work with "--tab".

           The IO::Compress::Gzip Perl module is used to compress SQL dump
           files as they are written to disk.  The resulting dump files have a
           ".gz" extension, like "table.000000.sql.gz".  They can be
           uncompressed with gzip.  mk-parallel-restore will automatically
           uncompress them, too, when restoring.

           This option does not work with "--tab" because the MySQL server
           writes the tab dump files directly using "SELECT INTO OUTFILE".

       --help
           Show help and exit.

       --host
           short form: -h; type: string

           Connect to host.

       --ignore-databases
           type: Hash

           Ignore this comma-separated list of databases.

       --ignore-databases-regex
           type: string

           Ignore databases whose names match this Perl regex.

       --ignore-engines
           type: Hash; default: FEDERATED,MRG_MyISAM

           Do not dump tables that use this comma-separated list of storage
           engines.

           The schema file will be dumped as usual.  This prevents dumping
           data for Federated tables and Merge tables.

       --ignore-tables
           type: Hash

           Ignore this comma-separated list of table names.

           Table names may be qualified with the database name.

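           For example, to skip the grant table mysql.user and one qualified
           table (both names illustrative):

             mk-parallel-dump --ignore-tables mysql.user,db1.log
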
       --ignore-tables-regex
           type: string

           Ignore tables whose names match this Perl regex.

       --lock-tables
           Use "LOCK TABLES" (disables "--[no]flush-lock").

           Disables "--[no]flush-lock" (unless it was explicitly set) and
           locks tables with "LOCK TABLES READ".  The lock is taken and
           released for every table as it is dumped.

       --lossless-floats
           Dump float types with extra precision for lossless restore
           (requires "--tab").

           Wraps floating-point columns with a call to "FORMAT()" with 17
           digits of precision.  According to the comments in Google's
           patches, this will give lossless dumping and reloading in most
           cases.  (I shamelessly stole this technique from them.  I don't
           know enough about floating-point math to have an opinion).

           This works only with "--tab".

       --password
           short form: -p; type: string

           Password to use when connecting.

       --pid
           type: string

           Create the given PID file.  The file contains the process ID of
           the script, and is removed when the script exits.  Before starting,
           the script checks whether the PID file already exists.  If it does
           not, the script creates the file and writes its own PID to it.  If
           it does, the script checks the file's contents: if it contains a
           PID and a process with that PID is running, the script dies; if no
           such process is running, the script overwrites the file with its
           own PID and starts; if the file contains no PID at all, the script
           dies.

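           Conceptually, the startup check is equivalent to this shell sketch
           (the PID file path is hypothetical):

             PIDFILE=/tmp/mk-parallel-dump.pid
             if [ -f "$PIDFILE" ]; then
                 PID=$(cat "$PIDFILE")
                 [ -z "$PID" ] && exit 1                # no PID in file: die
                 kill -0 "$PID" 2>/dev/null && exit 1   # still running: die
             fi
             echo $$ > "$PIDFILE"                       # claim the PID file
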
       --port
           short form: -P; type: int

           Port number to use for connection.

       --progress
           Display progress reports.

           Progress is displayed each time a table or chunk of a table
           finishes dumping.  Progress is calculated by measuring the average
           data size of each full chunk and assuming all bytes are created
           equal.  The output is the completed and total bytes, the percent
           completed, the estimated time remaining, and the estimated
           completion time.  For example:

             40.72k/112.00k  36.36% ETA 00:00 (2009-10-27T19:17:53)

           If "--chunk-size" is not specified, then each table is effectively
           one big chunk and the progress reports are pretty accurate.  When
           "--chunk-size" is specified, the progress reports can be skewed
           because of averaging.

           Progress reports are inaccurate when a dump is resumed.  This is a
           known issue and will be fixed in a later release.

       --quiet
           short form: -q

           Quiet output; disables "--verbose".

       --[no]resume
           default: yes

           Resume dumps.

       --set-vars
           type: string; default: wait_timeout=10000

           Set these MySQL variables.  Immediately after connecting to MySQL,
           this string will be appended to SET and executed.

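           For example, since the string is appended to SET verbatim, the
           following causes the tool to execute "SET wait_timeout=10000,
           net_write_timeout=600" after connecting (the second variable is an
           arbitrary illustration):

             mk-parallel-dump --set-vars "wait_timeout=10000, net_write_timeout=600"
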
       --socket
           short form: -S; type: string

           Socket file to use for connection.

       --stop-slave
           Issue "STOP SLAVE" on the server before dumping data.

           This ensures that the data is not changing during the dump.  The
           tool issues "START SLAVE" after the dump is complete.

           If the slave is not running, the tool throws an error and exits.
           This prevents possibly bad things from happening if the slave is
           not running because of a problem, or because someone intentionally
           stopped it for maintenance or some other purpose.

       --tab
           Dump tab-separated (sets "--umask" 0).

           Dump via "SELECT INTO OUTFILE", which is similar to what
           "mysqldump" does with the "--tab" option, but you're not
           constrained to a single database at a time.

           Before you use this option, make sure you know what "SELECT INTO
           OUTFILE" does!  I recommend using it only if you're running
           mk-parallel-dump on the same machine as the MySQL server, but
           there is no protection if you don't.

           This option sets "--umask" to zero so auto-created directories are
           writable by the MySQL server.

       --tables
           short form: -t; type: Hash

           Dump only this comma-separated list of table names.

           Table names may be qualified with the database name.

       --tables-regex
           type: string

           Dump only tables whose names match this Perl regex.

       --threads
           type: int; default: 2

           Number of threads to dump concurrently.

           Specifies the number of parallel processes to run.  The default is
           2 (this is mk-parallel-dump, after all -- 1 is not parallel).  On
           GNU/Linux machines, the default is the number of times 'processor'
           appears in /proc/cpuinfo.  On Windows, the default is read from the
           environment.  In any case, the default is at least 2, even when
           there's only a single processor.

       --[no]tz-utc
           default: yes

           Enable TIMESTAMP columns to be dumped and reloaded between
           different time zones.  mk-parallel-dump sets its connection time
           zone to UTC and adds "SET TIME_ZONE='+00:00'" to the dump file.
           Without this option, TIMESTAMP columns are dumped and reloaded in
           the time zones local to the source and destination servers, which
           can cause the values to change.  This option also protects against
           changes due to daylight saving time.

           This option is identical to "mysqldump --tz-utc".  In fact, the
           above text was copied from mysqldump's man page.

       --umask
           type: string

           Set the program's "umask" to this octal value.

           This is useful when you want created files and directories to be
           readable or writable by other users (for example, the MySQL server
           itself).

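           For example, to make the dump files and auto-created directories
           group- and world-readable (the umask value is illustrative):

             mk-parallel-dump --umask 0022 --base-dir /tmp/dumps
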
       --user
           short form: -u; type: string

           User for login if not current user.

       --verbose
           short form: -v; cumulative: yes

           Be verbose; can specify multiple times.

           See "OUTPUT".

       --version
           Show version and exit.

       --wait
           short form: -w; type: time; default: 5m

           Wait limit when the server is down.

           If the MySQL server crashes during dumping, the tool waits until
           the server comes back and then continues with the rest of the
           tables.  "mk-parallel-dump" will check the server every second
           until this time is exhausted, at which point it will give up and
           exit.

           This implements Peter Zaitsev's "safe dump" request: sometimes a
           dump on a server that has corrupt data will kill the server.
           mk-parallel-dump will wait for the server to restart, then keep
           going.  It's hard to say which table killed the server, so tables
           that were being dumped when the crash happened will not be
           retried.  No additional locks will be taken after the server
           restarts; it's assumed this behavior is useful only on a server
           you're not trying to dump while it's in production.

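           For example, assuming the usual Maatkit time suffixes (s, m, h,
           d; the default "5m" means five minutes), the following keeps
           checking a crashed server for up to one hour before giving up:

             mk-parallel-dump --wait 1h
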
       --[no]zero-chunk
           default: yes

           Add a chunk for rows with zero or zero-equivalent values.  This
           only has an effect when "--chunk-size" is specified.  The purpose
           of the zero chunk is to capture a potentially large number of zero
           values that would imbalance the size of the first chunk.  For
           example, if a lot of negative numbers were inserted into an
           unsigned integer column, causing them to be stored as zeros, then
           these zero values are captured by the zero chunk instead of the
           first chunk and all its non-zero values.


DSN OPTIONS

       These DSN options are used to create a DSN.  Each option is given like
       "option=value".  The options are case-sensitive, so P and p are not the
       same option.  There cannot be whitespace before or after the "=" and if
       the value contains whitespace it must be quoted.  DSN options are
       comma-separated.  See the maatkit manpage for full details.

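       For example, the following DSN connects to port 3306 on host1 as user
       "dumper", with sakila as the default database (all values
       illustrative):

         mk-parallel-dump h=host1,P=3306,u=dumper,D=sakila
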
       •   A

           dsn: charset; copy: yes

           Default character set.

       •   D

           dsn: database; copy: yes

           Default database.

       •   F

           dsn: mysql_read_default_file; copy: yes

           Only read default options from the given file.

       •   h

           dsn: host; copy: yes

           Connect to host.

       •   p

           dsn: password; copy: yes

           Password to use when connecting.

       •   P

           dsn: port; copy: yes

           Port number to use for connection.

       •   S

           dsn: mysql_socket; copy: yes

           Socket file to use for connection.

       •   u

           dsn: user; copy: yes

           User for login if not current user.


DOWNLOADING

       You can download Maatkit from Google Code at
       <http://code.google.com/p/maatkit/>, or you can get any of the tools
       easily with a command like the following:

          wget http://www.maatkit.org/get/toolname
          or
          wget http://www.maatkit.org/trunk/toolname

       Replace "toolname" with the name (or fragment of a name) of any of the
       Maatkit tools.  Once downloaded, they're ready to run; no installation
       is needed.  The first URL gets the latest released version of the
       tool, and the second gets the latest trunk code from Subversion.


ENVIRONMENT

       The environment variable "MKDEBUG" enables verbose debugging output in
       all of the Maatkit tools:

          MKDEBUG=1 mk-....


SYSTEM REQUIREMENTS

       You need Perl, DBI, DBD::mysql, and some core packages that ought to be
       installed in any reasonably new version of Perl.

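       One quick way to check that the required Perl modules are available:

          perl -MDBI -MDBD::mysql -e 'print "ok\n"'
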
       This program works best on GNU/Linux.  Filename quoting might not work
       well on Microsoft Windows if you have spaces or funny characters in
       your database or table names.


BUGS

       For a list of known bugs see
       <http://www.maatkit.org/bugs/mk-parallel-dump>.

       Please use Google Code Issues and Groups to report bugs or request
       support: <http://code.google.com/p/maatkit/>.  You can also join
       #maatkit on Freenode to discuss Maatkit.

       Please include the complete command-line used to reproduce the problem
       you are seeing, the version of all MySQL servers involved, the complete
       output of the tool when run with "--version", and if possible,
       debugging output produced by running with the "MKDEBUG=1" environment
       variable.


COPYRIGHT, LICENSE AND WARRANTY

       This program is copyright 2007-2011 Baron Schwartz.  Feedback and
       improvements are welcome.

       THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED
       WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
       MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

       This program is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation, version 2; OR the Perl Artistic License.  On
       UNIX and similar systems, you can issue `man perlgpl' or `man
       perlartistic' to read these licenses.

       You should have received a copy of the GNU General Public License along
       with this program; if not, write to the Free Software Foundation, Inc.,
       59 Temple Place, Suite 330, Boston, MA  02111-1307  USA.


SEE ALSO

       See also mk-parallel-restore.


AUTHOR

       Baron Schwartz


ABOUT MAATKIT

       This tool is part of Maatkit, a toolkit for power users of MySQL.
       Maatkit was created by Baron Schwartz; Baron and Daniel Nichter are the
       primary code contributors.  Both are employed by Percona.  Financial
       support for Maatkit development is primarily provided by Percona and
       its clients.


VERSION

       This manual page documents Ver 1.0.28 Distrib 7540 $Revision: 7460 $.

perl v5.34.0                      2021-07-22               MK-PARALLEL-DUMP(1)