PG_DUMP(1)               PostgreSQL 11.3 Documentation              PG_DUMP(1)

NAME
       pg_dump - extract a PostgreSQL database into a script file or other
       archive file

SYNOPSIS
       pg_dump [connection-option...] [option...] [dbname]

DESCRIPTION
       pg_dump is a utility for backing up a PostgreSQL database. It makes
       consistent backups even if the database is being used concurrently.
       pg_dump does not block other users accessing the database (readers or
       writers).

       pg_dump only dumps a single database. To back up an entire cluster, or
       to back up global objects that are common to all databases in a cluster
       (such as roles and tablespaces), use pg_dumpall(1).

       Dumps can be output in script or archive file formats. Script dumps are
       plain-text files containing the SQL commands required to reconstruct
       the database to the state it was in at the time it was saved. To
       restore from such a script, feed it to psql(1). Script files can be
       used to reconstruct the database even on other machines and other
       architectures; with some modifications, even on other SQL database
       products.

       The alternative archive file formats must be used with pg_restore(1) to
       rebuild the database. They allow pg_restore to be selective about what
       is restored, or even to reorder the items prior to restoring them. The
       archive file formats are designed to be portable across architectures.

       When used with one of the archive file formats and combined with
       pg_restore, pg_dump provides a flexible archival and transfer
       mechanism. pg_dump can be used to back up an entire database, then
       pg_restore can be used to examine the archive and/or select which parts
       of the database are to be restored. The most flexible output file
       formats are the “custom” format (-Fc) and the “directory” format (-Fd).
       They allow for selection and reordering of all archived items, support
       parallel restoration, and are compressed by default. The “directory”
       format is the only format that supports parallel dumps.

       While running pg_dump, one should examine the output for any warnings
       (printed on standard error), especially in light of the limitations
       listed below.

OPTIONS
       The following command-line options control the content and format of
       the output.

       dbname
           Specifies the name of the database to be dumped. If this is not
           specified, the environment variable PGDATABASE is used. If that is
           not set, the user name specified for the connection is used.

       -a
       --data-only
           Dump only the data, not the schema (data definitions). Table data,
           large objects, and sequence values are dumped.

           This option is similar to, but for historical reasons not identical
           to, specifying --section=data.

       -b
       --blobs
           Include large objects in the dump. This is the default behavior
           except when --schema, --table, or --schema-only is specified. The
           -b switch is therefore only useful to add large objects to dumps
           where a specific schema or table has been requested. Note that
           blobs are considered data and therefore will be included when
           --data-only is used, but not when --schema-only is.

       -B
       --no-blobs
           Exclude large objects in the dump.

           When both -b and -B are given, the behavior is to output large
           objects when data is being dumped; see the -b documentation.

       -c
       --clean
           Output commands to clean (drop) database objects prior to
           outputting the commands for creating them. (Unless --if-exists is
           also specified, restore might generate some harmless error
           messages if any objects were not present in the destination
           database.)

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       -C
       --create
           Begin the output with a command to create the database itself and
           reconnect to the created database. (With a script of this form, it
           doesn't matter which database in the destination installation you
           connect to before running the script.) If --clean is also
           specified, the script drops and recreates the target database
           before reconnecting to it.

           With --create, the output also includes the database's comment if
           any, and any configuration variable settings that are specific to
           this database, that is, any ALTER DATABASE ... SET ...  and ALTER
           ROLE ... IN DATABASE ... SET ...  commands that mention this
           database. Access privileges for the database itself are also
           dumped, unless --no-acl is specified.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another way
           to get the same result is to set the PGCLIENTENCODING environment
           variable to the desired dump encoding.)

       -f file
       --file=file
           Send output to the specified file. This parameter can be omitted
           for file-based output formats, in which case the standard output is
           used. It must be given for the directory output format, however,
           where it specifies the target directory instead of a file. In this
           case the directory is created by pg_dump and must not exist
           beforehand.

       -F format
       --format=format
           Selects the format of the output.  format can be one of the
           following:

           p
           plain
               Output a plain-text SQL script file (the default).

           c
           custom
               Output a custom-format archive suitable for input into
               pg_restore. Together with the directory output format, this is
               the most flexible output format in that it allows manual
               selection and reordering of archived items during restore. This
               format is also compressed by default.

           d
           directory
               Output a directory-format archive suitable for input into
               pg_restore. This will create a directory with one file for each
               table and blob being dumped, plus a so-called Table of Contents
               file describing the dumped objects in a machine-readable format
               that pg_restore can read. A directory format archive can be
               manipulated with standard Unix tools; for example, files in an
               uncompressed archive can be compressed with the gzip tool. This
               format is compressed by default and also supports parallel
               dumps.

           t
           tar
               Output a tar-format archive suitable for input into pg_restore.
               The tar format is compatible with the directory format:
               extracting a tar-format archive produces a valid
               directory-format archive. However, the tar format does not
               support compression. Also, when using tar format the relative
               order of table data items cannot be changed during restore.

       -j njobs
       --jobs=njobs
           Run the dump in parallel by dumping njobs tables simultaneously.
           This option reduces the time of the dump but also increases the
           load on the database server. You can only use this option with the
           directory output format because this is the only output format
           where multiple processes can write their data at the same time.

           pg_dump will open njobs + 1 connections to the database, so make
           sure your max_connections setting is high enough to accommodate all
           connections.

           Requesting exclusive locks on database objects while running a
           parallel dump could cause the dump to fail. The reason is that the
           pg_dump master process requests shared locks on the objects that
           the worker processes are going to dump later, in order to make sure
           that nobody deletes them while the dump is running. If another
           client then requests an exclusive lock on a table, that lock will
           not be granted but will be queued waiting for the shared lock of
           the master process to be released. Consequently, any other access
           to the table will not be granted either and will queue after the
           exclusive lock request. This includes the worker process trying to
           dump the table. Without any precautions this would be a classic
           deadlock situation. To detect this conflict, the pg_dump worker
           process requests another shared lock using the NOWAIT option. If
           the worker process is not granted this shared lock, somebody else
           must have requested an exclusive lock in the meantime and there is
           no way to continue with the dump, so pg_dump has no choice but to
           abort the dump.

           For a consistent backup, the database server needs to support
           synchronized snapshots, a feature that was introduced in PostgreSQL
           9.2 for primary servers and 10 for standbys. With this feature,
           database clients can ensure they see the same data set even though
           they use different connections.  pg_dump -j uses multiple database
           connections; it connects to the database once with the master
           process and once again for each worker job. Without the
           synchronized snapshot feature, the different worker jobs wouldn't
           be guaranteed to see the same data in each connection, which could
           lead to an inconsistent backup.

           If you want to run a parallel dump of a pre-9.2 server, you need to
           make sure that the database content doesn't change between the
           time the master connects to the database and the time the last
           worker job has connected to the database. The easiest way to do
           this is to halt any data modifying processes (DDL and DML)
           accessing the database before starting the backup. You also need
           to specify the --no-synchronized-snapshots parameter when running
           pg_dump -j against a pre-9.2 PostgreSQL server.
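
           The connection arithmetic can be made concrete. The pg_dump
           invocation below is a sketch only (the database name "mydb" and
           output path are hypothetical, and running it requires a live
           server), so it appears as a comment; the runnable part restates
           the njobs + 1 rule:

```shell
# Hypothetical parallel dump (requires a running server; names invented):
#   pg_dump -j 4 -Fd -f /backups/mydb.dir mydb
# Such a run opens njobs + 1 connections: one master plus four workers.
njobs=4
total=$((njobs + 1))
echo "$total"    # connections that max_connections must accommodate
```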

       -n schema
       --schema=schema
           Dump only schemas matching schema; this selects both the schema
           itself, and all its contained objects. When this option is not
           specified, all non-system schemas in the target database will be
           dumped. Multiple schemas can be selected by writing multiple -n
           switches. Also, the schema parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple schemas can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see EXAMPLES.

               Note
               When -n is specified, pg_dump makes no attempt to dump any
               other database objects that the selected schema(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-schema dump can be successfully restored by themselves
               into a clean database.

               Note
               Non-schema objects such as blobs are not dumped when -n is
               specified. You can add blobs back to the dump with the --blobs
               switch.

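           The quoting caveat can be demonstrated without a server: in a
           directory containing files that happen to match, the shell
           expands an unquoted pattern into filenames before pg_dump ever
           sees it. The schema names and pg_dump call here are hypothetical:

```shell
# Show the shell expanding an unquoted wildcard against filenames.
tmp=$(mktemp -d)
cd "$tmp"
touch east west                # filenames that match the pattern *st
set -- *st                     # unquoted: expanded by the shell
unquoted="$*"
set -- '*st'                   # quoted: passed through verbatim
quoted="$*"
echo "$unquoted"               # east west
echo "$quoted"                 # *st
# So quote schema patterns, e.g. (needs a server; "mydb" is invented):
#   pg_dump -n 'east*' mydb
```
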
       -N schema
       --exclude-schema=schema
           Do not dump any schemas matching the schema pattern. The pattern is
           interpreted according to the same rules as for -n.  -N can be given
           more than once to exclude schemas matching any of several patterns.

           When both -n and -N are given, the behavior is to dump just the
           schemas that match at least one -n switch but no -N switches. If -N
           appears without -n, then schemas matching -N are excluded from what
           is otherwise a normal dump.

       -o
       --oids
           Dump object identifiers (OIDs) as part of the data for every table.
           Use this option if your application references the OID columns in
           some way (e.g., in a foreign key constraint). Otherwise, this
           option should not be used.

       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dump issues ALTER OWNER or SET
           SESSION AUTHORIZATION statements to set ownership of created
           database objects. These statements will fail when the script is run
           unless it is started by a superuser (or the same user that owns all
           of the objects in the script). To make a script that can be
           restored by any user, but will give that user ownership of all the
           objects, specify -O.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       -R
       --no-reconnect
           This option is obsolete but still accepted for backwards
           compatibility.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

           This option is the inverse of --data-only. It is similar to, but
           for historical reasons not identical to, specifying
           --section=pre-data --section=post-data.

           (Do not confuse this with the --schema option, which uses the word
           “schema” in a different meaning.)

           To exclude table data for only a subset of tables in the database,
           see --exclude-table-data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually, it's
           better to leave this out, and instead start the resulting script as
           superuser.)

       -t table
       --table=table
           Dump only tables with names matching table. For this purpose,
           “table” includes views, materialized views, sequences, and foreign
           tables. Multiple tables can be selected by writing multiple -t
           switches. Also, the table parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple tables can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see EXAMPLES.

           The -n and -N switches have no effect when -t is used, because
           tables selected by -t will be dumped regardless of those switches,
           and non-table objects will not be dumped.

               Note
               When -t is specified, pg_dump makes no attempt to dump any
               other database objects that the selected table(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-table dump can be successfully restored by themselves
               into a clean database.

               Note
               The behavior of the -t switch is not entirely upward compatible
               with pre-8.2 PostgreSQL versions. Formerly, writing -t tab
               would dump all tables named tab, but now it just dumps
               whichever one is visible in your default search path. To get
               the old behavior you can write -t '*.tab'. Also, you must write
               something like -t sch.tab to select a table in a particular
               schema, rather than the old locution of -n sch -t tab.

       -T table
       --exclude-table=table
           Do not dump any tables matching the table pattern. The pattern is
           interpreted according to the same rules as for -t.  -T can be given
           more than once to exclude tables matching any of several patterns.

           When both -t and -T are given, the behavior is to dump just the
           tables that match at least one -t switch but no -T switches. If -T
           appears without -t, then tables matching -T are excluded from what
           is otherwise a normal dump.

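           The combined -t/-T rule (dump tables matching at least one -t
           pattern and no -T pattern) can be sketched in shell over
           hypothetical table names; the equivalent pg_dump call, which
           would need a live server, appears only as a comment:

```shell
# Hypothetical equivalent (needs a server; names invented):
#   pg_dump -t 'ts_*' -T 'ts_log_*' mydb
# keep: succeeds for names matching the include pattern but not the exclude.
keep() {
    case $1 in
        ts_log_*) return 1 ;;   # excluded by -T 'ts_log_*'
        ts_*)     return 0 ;;   # included by -t 'ts_*'
    esac
    return 1                    # matches no -t pattern: not dumped
}
kept=""
for t in ts_data ts_log_2019 users; do
    keep "$t" && kept="$kept $t"
done
echo "$kept"                    # only ts_data survives both filters
```
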
       -v
       --verbose
           Specifies verbose mode. This will cause pg_dump to output detailed
           object comments and start/stop times to the dump file, and progress
           messages to standard error.

       -V
       --version
           Print the pg_dump version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       -Z 0..9
       --compress=0..9
           Specify the compression level to use. Zero means no compression.
           For the custom archive format, this specifies compression of
           individual table-data segments, and the default is to compress at a
           moderate level. For plain text output, setting a nonzero
           compression level causes the entire output file to be compressed,
           as though it had been fed through gzip; but the default is not to
           compress. The tar archive format currently does not support
           compression at all.

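           The gzip equivalence for plain-text output can be seen without a
           server. The pg_dump line below is a hypothetical sketch (invented
           database name), while the gzip round trip actually runs:

```shell
# Hypothetical compressed plain-text dump (needs a server):
#   pg_dump -Z 9 mydb > mydb.sql.gz
# The result is an ordinary gzip stream, so standard tools can read it:
tmp=$(mktemp -d)
printf 'CREATE TABLE t (i int);\n' > "$tmp/schema.sql"
gzip -9 -c "$tmp/schema.sql" > "$tmp/schema.sql.gz"
restored=$(gzip -dc "$tmp/schema.sql.gz")   # zcat works equally well
echo "$restored"
```
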
       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of the
           option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be loaded
           into non-PostgreSQL databases. However, since this option generates
           a separate command for each row, an error in reloading a row causes
           only that row to be lost rather than the entire table contents.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function bodies,
           and forces them to be quoted using SQL standard string syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dump to include commands to temporarily disable
           triggers on the target tables while the data is reloaded. Use this
           if you have referential integrity checks or other triggers on the
           tables that you do not want to invoke during data reload.

           Presently, the commands emitted for --disable-triggers must be done
           as superuser. So, you should also specify a superuser name with -S,
           or preferably be careful to start the resulting script as a
           superuser.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       --enable-row-security
           This option is relevant only when dumping the contents of a table
           which has row security. By default, pg_dump will set row_security
           to off, to ensure that all data is dumped from the table. If the
           user does not have sufficient privileges to bypass row security,
           then an error is thrown. This parameter instructs pg_dump to set
           row_security to on instead, allowing the user to dump the parts of
           the contents of the table that they have access to.

           Note that if you use this option currently, you probably also want
           the dump to be in INSERT format, as the COPY FROM used during
           restore does not support row security.

       --exclude-table-data=table
           Do not dump data for any tables matching the table pattern. The
           pattern is interpreted according to the same rules as for -t.
           --exclude-table-data can be given more than once to exclude tables
           matching any of several patterns. This option is useful when you
           need the definition of a particular table even though you do not
           need the data in it.

           To exclude data for all tables in the database, see --schema-only.

       --if-exists
           Use conditional commands (i.e., add an IF EXISTS clause) when
           cleaning database objects. This option is not valid unless --clean
           is also specified.

       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. However, since this
           option generates a separate command for each row, an error in
           reloading a row causes only that row to be lost rather than the
           entire table contents. Note that the restore might fail altogether
           if you have rearranged column order. The --column-inserts option is
           safe against column order changes, though even slower.

       --load-via-partition-root
           When dumping data for a table partition, make the COPY or INSERT
           statements target the root of the partitioning hierarchy that
           contains it, rather than the partition itself. This causes the
           appropriate partition to be re-determined for each row when the
           data is loaded. This may be useful when reloading data on a server
           where rows do not always fall into the same partitions as they did
           on the original server. That could happen, for example, if the
           partitioning column is of type text and the two systems have
           different definitions of the collation used to sort the
           partitioning column.

           It is best not to use parallelism when restoring from an archive
           made with this option, because pg_restore will not know exactly
           which partition(s) a given archive data item will load data into.
           This could result in inefficiency due to lock conflicts between
           parallel jobs, or perhaps even reload failures due to foreign key
           constraints being set up before all the relevant data is loaded.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the beginning
           of the dump. Instead fail if unable to lock a table within the
           specified timeout. The timeout may be specified in any of the
           formats accepted by SET statement_timeout. (Allowed formats vary
           depending on the server version you are dumping from, but an
           integer number of milliseconds is accepted by all versions.)

       --no-comments
           Do not dump comments.

       --no-publications
           Do not dump publications.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dump will wait for all files to be written safely to
           disk. This option causes pg_dump to return without waiting, which
           is faster, but means that a subsequent operating system crash can
           leave the dump corrupt. Generally, this option is useful for
           testing but should not be used when dumping data from a production
           installation.

       --no-synchronized-snapshots
           This option allows running pg_dump -j against a pre-9.2 server;
           see the documentation of the -j parameter for more details.

       --no-tablespaces
           Do not output commands to select tablespaces. With this option, all
           objects will be created in whichever tablespace is the default
           during restore.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables. This option has no
           effect on whether or not the table definitions (schema) are dumped;
           it only suppresses dumping the table data. Data in unlogged tables
           is always excluded when dumping from a standby server.

       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version is
           different from pg_dump's, or when the output is intended to be
           loaded into a server of a different major version. By default,
           pg_dump quotes only identifiers that are reserved words in its own
           major version. This sometimes results in compatibility issues when
           dealing with servers of other versions that may have slightly
           different sets of reserved words. Using --quote-all-identifiers
           prevents such issues, at the price of a harder-to-read dump script.

       --section=sectionname
           Only dump the named section. The section name can be pre-data,
           data, or post-data. This option can be specified more than once to
           select multiple sections. The default is to dump all sections.

           The data section contains actual table data, large-object contents,
           and sequence values. Post-data items include definitions of
           indexes, triggers, rules, and constraints other than validated
           check constraints. Pre-data items include all other data definition
           items.

       --serializable-deferrable
           Use a serializable transaction for the dump, to ensure that the
           snapshot used is consistent with later database states; but do this
           by waiting for a point in the transaction stream at which no
           anomalies can be present, so that there isn't a risk of the dump
           failing or causing other transactions to roll back with a
           serialization_failure. See Chapter 13 for more information about
           transaction isolation and concurrency control.

           This option is not beneficial for a dump which is intended only for
           disaster recovery. It could be useful for a dump used to load a
           copy of the database for reporting or other read-only load sharing
           while the original database continues to be updated. Without it the
           dump may reflect a state which is not consistent with any serial
           execution of the transactions eventually committed. For example, if
           batch processing techniques are used, a batch may show as closed in
           the dump without all of the items which are in the batch appearing.

           This option will make no difference if there are no read-write
           transactions active when pg_dump is started. If read-write
           transactions are active, the start of the dump may be delayed for
           an indeterminate length of time. Once running, performance with or
           without the switch is the same.

       --snapshot=snapshotname
           Use the specified synchronized snapshot when making a dump of the
           database (see Table 9.82 for more details).

           This option is useful when needing to synchronize the dump with a
           logical replication slot (see Chapter 49) or with a concurrent
           session.

           In the case of a parallel dump, the snapshot name defined by this
           option is used rather than taking a new snapshot.

       --strict-names
           Require that each schema (-n/--schema) and table (-t/--table)
           qualifier match at least one schema/table in the database to be
           dumped. Note that if none of the schema/table qualifiers find
           matches, pg_dump will generate an error even without
           --strict-names.

           This option has no effect on -N/--exclude-schema,
           -T/--exclude-table, or --exclude-table-data. An exclude pattern
           failing to match any objects is not considered an error.

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes the
           dump more standards-compatible, but depending on the history of the
           objects in the dump, might not restore properly. Also, a dump using
           SET SESSION AUTHORIZATION will certainly require superuser
           privileges to restore correctly, whereas ALTER OWNER requires
           lesser privileges.

       -?
       --help
           Show help about pg_dump command line arguments, and exit.

       The following command-line options control the database connection
       parameters.

       -d dbname
       --dbname=dbname
           Specifies the name of the database to connect to. This is
           equivalent to specifying dbname as the first non-option argument on
           the command line.

           If this parameter contains an = sign or starts with a valid URI
           prefix (postgresql:// or postgres://), it is treated as a conninfo
           string. See Section 34.1 for more information.

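           The classification rule above can be sketched in shell; the host
           and database names used here are hypothetical:

```shell
# Mimic how a -d argument is classified: values containing "=" or starting
# with a URI prefix are conninfo strings; anything else is a database name.
classify() {
    case $1 in
        postgresql://*|postgres://*) echo conninfo ;;
        *=*)                         echo conninfo ;;
        *)                           echo dbname ;;
    esac
}
a=$(classify mydb)
b=$(classify "host=db.example.com dbname=mydb")
c=$(classify "postgresql://alice@db.example.com/mydb")
printf '%s %s %s\n' "$a" "$b" "$c"
```
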
       -h host
       --host=host
           Specifies the host name of the machine on which the server is
           running. If the value begins with a slash, it is used as the
           directory for the Unix domain socket. The default is taken from
           the PGHOST environment variable, if set, else a Unix domain
           socket connection is attempted.

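           For example, to connect through a Unix domain socket located in a
           non-default directory (the path shown is illustrative):

               $ pg_dump -h /var/run/postgresql mydb > db.sql
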
       -p port
       --port=port
           Specifies the TCP port or local Unix domain socket file extension
           on which the server is listening for connections. Defaults to the
           PGPORT environment variable, if set, or a compiled-in default.

       -U username
       --username=username
           User name to connect as.

       -w
       --no-password
           Never issue a password prompt. If the server requires password
           authentication and a password is not available by other means
           such as a .pgpass file, the connection attempt will fail. This
           option can be useful in batch jobs and scripts where no user is
           present to enter a password.

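           For example, an unattended backup script can combine -w with a
           ~/.pgpass entry (the host and credentials below are illustrative;
           the file must not be group- or world-readable):

               # line format in ~/.pgpass: hostname:port:database:username:password
               # e.g.  db.example.com:5432:mydb:alice:secret
               $ pg_dump -w -h db.example.com -U alice mydb > db.sql
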
       -W
       --password
           Force pg_dump to prompt for a password before connecting to a
           database.

           This option is never essential, since pg_dump will automatically
           prompt for a password if the server demands password
           authentication. However, pg_dump will waste a connection attempt
           finding out that the server wants a password. In some cases it is
           worth typing -W to avoid the extra connection attempt.

       --role=rolename
           Specifies a role name to be used to create the dump. This option
           causes pg_dump to issue a SET ROLE rolename command after
           connecting to the database. It is useful when the authenticated
           user (specified by -U) lacks privileges needed by pg_dump, but
           can switch to a role with the required rights. Some installations
           have a policy against logging in directly as a superuser, and use
           of this option allows dumps to be made without violating the
           policy.

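           For example, assuming the login role alice is allowed to switch
           to a role named dump_role that holds the necessary read
           privileges (both names are illustrative):

               $ pg_dump -U alice --role=dump_role mydb > db.sql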

ENVIRONMENT
       PGDATABASE
       PGHOST
       PGOPTIONS
       PGPORT
       PGUSER
           Default connection parameters.

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see Section 34.14).


DIAGNOSTICS
       pg_dump internally executes SELECT statements. If you have problems
       running pg_dump, make sure you are able to select information from
       the database using, for example, psql(1). Also, any default
       connection settings and environment variables used by the libpq
       front-end library will apply.

       The database activity of pg_dump is normally collected by the
       statistics collector. If this is undesirable, you can set parameter
       track_counts to false via PGOPTIONS or the ALTER USER command.

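       For example, to turn off statistics collection for a single pg_dump
       run (track_counts can only be changed by superusers, so this works
       only when connecting as one):

           $ PGOPTIONS='-c track_counts=off' pg_dump mydb > db.sql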

NOTES
       If your database cluster has any local additions to the template1
       database, be careful to restore the output of pg_dump into a truly
       empty database; otherwise you are likely to get errors due to
       duplicate definitions of the added objects. To make an empty database
       without any local additions, copy from template0 not template1, for
       example:

           CREATE DATABASE foo WITH TEMPLATE template0;

       When a data-only dump is chosen and the option --disable-triggers is
       used, pg_dump emits commands to disable triggers on user tables
       before inserting the data, and then commands to re-enable them after
       the data has been inserted. If the restore is stopped in the middle,
       the system catalogs might be left in the wrong state.

       The dump file produced by pg_dump does not contain the statistics
       used by the optimizer to make query planning decisions. Therefore, it
       is wise to run ANALYZE after restoring from a dump file to ensure
       optimal performance; see Section 24.1.3 and Section 24.1.6 for more
       information.

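       For example, either of the following regenerates planner statistics
       for a freshly restored database named newdb:

           $ psql -d newdb -c 'ANALYZE'
           $ vacuumdb --analyze-only newdb
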
       Because pg_dump is used to transfer data to newer versions of
       PostgreSQL, the output of pg_dump can be expected to load into
       PostgreSQL server versions newer than pg_dump's version.  pg_dump can
       also dump from PostgreSQL servers older than its own version.
       (Currently, servers back to version 8.0 are supported.) However,
       pg_dump cannot dump from PostgreSQL servers newer than its own major
       version; it will refuse to even try, rather than risk making an
       invalid dump. Also, it is not guaranteed that pg_dump's output can be
       loaded into a server of an older major version; not even if the dump
       was taken from a server of that version. Loading a dump file into an
       older server may require manual editing of the dump file to remove
       syntax not understood by the older server. Use of the
       --quote-all-identifiers option is recommended in cross-version cases,
       as it can prevent problems arising from varying reserved-word lists
       in different PostgreSQL versions.

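       For example, a dump intended to be loaded into a different major
       version might be taken as:

           $ pg_dump --quote-all-identifiers mydb > db.sql
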
       When dumping logical replication subscriptions, pg_dump will generate
       CREATE SUBSCRIPTION commands that use the connect = false option, so
       that restoring the subscription does not make remote connections for
       creating a replication slot or for initial table copy. That way, the
       dump can be restored without requiring network access to the remote
       servers. It is then up to the user to reactivate the subscriptions in
       a suitable way. If the involved hosts have changed, the connection
       information might have to be changed. It might also be appropriate to
       truncate the target tables before initiating a new full table copy.

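       For example, one possible sequence for pointing a restored
       subscription named mysub (the name and connection string are
       illustrative) at a new host and reactivating it, run from psql:

           ALTER SUBSCRIPTION mysub CONNECTION 'host=newhost dbname=mydb';
           ALTER SUBSCRIPTION mysub ENABLE;

       Depending on how the replication slot was handled, further steps
       such as refreshing the publication may also be required.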

EXAMPLES
       To dump a database called mydb into a SQL-script file:

           $ pg_dump mydb > db.sql

       To reload such a script into a (freshly created) database named
       newdb:

           $ psql -d newdb -f db.sql

       To dump a database into a custom-format archive file:

           $ pg_dump -Fc mydb > db.dump

       To dump a database into a directory-format archive:

           $ pg_dump -Fd mydb -f dumpdir

       To dump a database into a directory-format archive in parallel with
       5 worker jobs:

           $ pg_dump -Fd mydb -j 5 -f dumpdir

       To reload an archive file into a (freshly created) database named
       newdb:

           $ pg_restore -d newdb db.dump

       To reload an archive file into the same database it was dumped from,
       discarding the current contents of that database:

           $ pg_restore -d postgres --clean --create db.dump

       To dump a single table named mytab:

           $ pg_dump -t mytab mydb > db.sql

       To dump all tables whose names start with emp in the detroit schema,
       except for the table named employee_log:

           $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

       To dump all schemas whose names start with east or west and end in
       gsm, excluding any schemas whose names contain the word test:

           $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

       The same, using regular expression notation to consolidate the
       switches:

           $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

       To dump all database objects except for tables whose names begin
       with ts_:

           $ pg_dump -T 'ts_*' mydb > db.sql

       To specify an upper-case or mixed-case name in -t and related
       switches, you need to double-quote the name; else it will be folded
       to lower case (see Patterns). But double quotes are special to the
       shell, so in turn they must be quoted. Thus, to dump a single table
       with a mixed-case name, you need something like

           $ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql


SEE ALSO
       pg_dumpall(1), pg_restore(1), psql(1)



PostgreSQL 11.3                      2019                           PG_DUMP(1)