PG_DUMP(1)               PostgreSQL 12.6 Documentation              PG_DUMP(1)

NAME

       pg_dump - extract a PostgreSQL database into a script file or other
       archive file

SYNOPSIS

       pg_dump [connection-option...] [option...] [dbname]

DESCRIPTION

       pg_dump is a utility for backing up a PostgreSQL database. It makes
       consistent backups even if the database is being used concurrently.
       pg_dump does not block other users accessing the database (readers or
       writers).

       pg_dump only dumps a single database. To back up an entire cluster, or
       to back up global objects that are common to all databases in a cluster
       (such as roles and tablespaces), use pg_dumpall(1).

       Dumps can be output in script or archive file formats. Script dumps are
       plain-text files containing the SQL commands required to reconstruct
       the database to the state it was in at the time it was saved. To
       restore from such a script, feed it to psql(1). Script files can be
       used to reconstruct the database even on other machines and other
       architectures; with some modifications, even on other SQL database
       products.

       The alternative archive file formats must be used with pg_restore(1) to
       rebuild the database. They allow pg_restore to be selective about what
       is restored, or even to reorder the items prior to being restored. The
       archive file formats are designed to be portable across architectures.

       When used with one of the archive file formats and combined with
       pg_restore, pg_dump provides a flexible archival and transfer
       mechanism.  pg_dump can be used to back up an entire database, then
       pg_restore can be used to examine the archive and/or select which parts
       of the database are to be restored. The most flexible output file
       formats are the “custom” format (-Fc) and the “directory” format (-Fd).
       They allow for selection and reordering of all archived items, support
       parallel restoration, and are compressed by default. The “directory”
       format is the only format that supports parallel dumps.

       While running pg_dump, one should examine the output for any warnings
       (printed on standard error), especially in light of the limitations
       listed below.
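
       For example, a typical workflow with the custom format might look
       like this (the database and table names here are placeholders):

           $ pg_dump -Fc -f mydb.dump mydb
           $ pg_restore -d newdb -t mytable mydb.dump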

OPTIONS

       The following command-line options control the content and format of
       the output.

       dbname
           Specifies the name of the database to be dumped. If this is not
           specified, the environment variable PGDATABASE is used. If that is
           not set, the user name specified for the connection is used.

       -a
       --data-only
           Dump only the data, not the schema (data definitions). Table data,
           large objects, and sequence values are dumped.

           This option is similar to, but for historical reasons not identical
           to, specifying --section=data.

       -b
       --blobs
           Include large objects in the dump. This is the default behavior
           except when --schema, --table, or --schema-only is specified. The
           -b switch is therefore only useful to add large objects to dumps
           where a specific schema or table has been requested. Note that
           blobs are considered data and therefore will be included when
           --data-only is used, but not when --schema-only is.

       -B
       --no-blobs
           Exclude large objects in the dump.

           When both -b and -B are given, the behavior is to output large
           objects when data is being dumped; see the -b documentation.

       -c
       --clean
           Output commands to clean (drop) database objects prior to
           outputting the commands for creating them. (Unless --if-exists is
           also specified, restore might generate some harmless error
           messages, if any objects were not present in the destination
           database.)

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       -C
       --create
           Begin the output with a command to create the database itself and
           reconnect to the created database. (With a script of this form, it
           doesn't matter which database in the destination installation you
           connect to before running the script.) If --clean is also
           specified, the script drops and recreates the target database
           before reconnecting to it.

           With --create, the output also includes the database's comment if
           any, and any configuration variable settings that are specific to
           this database, that is, any ALTER DATABASE ... SET ...  and ALTER
           ROLE ... IN DATABASE ... SET ...  commands that mention this
           database. Access privileges for the database itself are also
           dumped, unless --no-acl is specified.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another way
           to get the same result is to set the PGCLIENTENCODING environment
           variable to the desired dump encoding.)

       -f file
       --file=file
           Send output to the specified file. This parameter can be omitted
           for file-based output formats, in which case the standard output is
           used. It must be given for the directory output format however,
           where it specifies the target directory instead of a file. In this
           case the directory is created by pg_dump and must not exist before.

       -F format
       --format=format
           Selects the format of the output.  format can be one of the
           following:

           p
           plain
               Output a plain-text SQL script file (the default).

           c
           custom
               Output a custom-format archive suitable for input into
               pg_restore. Together with the directory output format, this is
               the most flexible output format in that it allows manual
               selection and reordering of archived items during restore. This
               format is also compressed by default.

           d
           directory
               Output a directory-format archive suitable for input into
               pg_restore. This will create a directory with one file for each
               table and blob being dumped, plus a so-called Table of Contents
               file describing the dumped objects in a machine-readable format
               that pg_restore can read. A directory format archive can be
               manipulated with standard Unix tools; for example, files in an
               uncompressed archive can be compressed with the gzip tool. This
               format is compressed by default and also supports parallel
               dumps.

           t
           tar
               Output a tar-format archive suitable for input into pg_restore.
               The tar format is compatible with the directory format:
               extracting a tar-format archive produces a valid
               directory-format archive. However, the tar format does not
               support compression. Also, when using tar format the relative
               order of table data items cannot be changed during restore.

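           For example, the same hypothetical database mydb could be dumped
           in each of the archive formats like this:

               $ pg_dump -Fc mydb > mydb.dump
               $ pg_dump -Fd mydb -f mydb.dir
               $ pg_dump -Ft mydb > mydb.tar
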
       -j njobs
       --jobs=njobs
           Run the dump in parallel by dumping njobs tables simultaneously.
           This option reduces the time of the dump but it also increases the
           load on the database server. You can only use this option with the
           directory output format because this is the only output format
           where multiple processes can write their data at the same time.

           pg_dump will open njobs + 1 connections to the database, so make
           sure your max_connections setting is high enough to accommodate all
           connections.

           Requesting exclusive locks on database objects while running a
           parallel dump could cause the dump to fail. The reason is that the
           pg_dump master process requests shared locks on the objects that
           the worker processes are going to dump later in order to make sure
           that nobody deletes them and makes them go away while the dump is
           running. If another client then requests an exclusive lock on a
           table, that lock will not be granted but will be queued waiting for
           the shared lock of the master process to be released. Consequently
           any other access to the table will not be granted either and will
           queue after the exclusive lock request. This includes the worker
           process trying to dump the table. Without any precautions this
           would be a classic deadlock situation. To detect this conflict, the
           pg_dump worker process requests another shared lock using the
           NOWAIT option. If the worker process is not granted this shared
           lock, somebody else must have requested an exclusive lock in the
           meantime and there is no way to continue with the dump, so pg_dump
           has no choice but to abort the dump.

           For a consistent backup, the database server needs to support
           synchronized snapshots, a feature that was introduced in PostgreSQL
           9.2 for primary servers and 10 for standbys. With this feature,
           database clients can ensure they see the same data set even though
           they use different connections.  pg_dump -j uses multiple database
           connections; it connects to the database once with the master
           process and once again for each worker job. Without the
           synchronized snapshot feature, the different worker jobs wouldn't
           be guaranteed to see the same data in each connection, which could
           lead to an inconsistent backup.

           If you want to run a parallel dump of a pre-9.2 server, you need to
           make sure that the database content doesn't change between the
           time the master connects to the database and the time the last
           worker job connects. The easiest way to do this is to halt any
           data modifying processes (DDL and DML) accessing the database
           before starting the backup. You also need to specify the
           --no-synchronized-snapshots parameter when running pg_dump -j
           against a pre-9.2 PostgreSQL server.

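           For example, a parallel dump of a hypothetical database mydb with
           8 worker jobs must use the directory format:

               $ pg_dump -Fd -j 8 -f mydb.dir mydb
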
       -n pattern
       --schema=pattern
           Dump only schemas matching pattern; this selects both the schema
           itself, and all its contained objects. When this option is not
           specified, all non-system schemas in the target database will be
           dumped. Multiple schemas can be selected by writing multiple -n
           switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple schemas can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see EXAMPLES.

               Note
               When -n is specified, pg_dump makes no attempt to dump any
               other database objects that the selected schema(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-schema dump can be successfully restored by themselves
               into a clean database.

               Note
               Non-schema objects such as blobs are not dumped when -n is
               specified. You can add blobs back to the dump with the --blobs
               switch.

       -N pattern
       --exclude-schema=pattern
           Do not dump any schemas matching pattern. The pattern is
           interpreted according to the same rules as for -n.  -N can be given
           more than once to exclude schemas matching any of several patterns.

           When both -n and -N are given, the behavior is to dump just the
           schemas that match at least one -n switch but no -N switches. If -N
           appears without -n, then schemas matching -N are excluded from what
           is otherwise a normal dump.

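           For example, to dump all schemas whose names begin with east,
           excluding any whose names contain the word test (the names and
           database here are placeholders), quoting the patterns so the
           shell does not expand them:

               $ pg_dump -n 'east*' -N '*test*' mydb > db.sql
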
       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dump issues ALTER OWNER or SET
           SESSION AUTHORIZATION statements to set ownership of created
           database objects. These statements will fail when the script is run
           unless it is started by a superuser (or the same user that owns all
           of the objects in the script). To make a script that can be
           restored by any user, but will give that user ownership of all the
           objects, specify -O.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       -R
       --no-reconnect
           This option is obsolete but still accepted for backwards
           compatibility.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

           This option is the inverse of --data-only. It is similar to, but
           for historical reasons not identical to, specifying
           --section=pre-data --section=post-data.

           (Do not confuse this with the --schema option, which uses the word
           “schema” in a different meaning.)

           To exclude table data for only a subset of tables in the database,
           see --exclude-table-data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually, it's
           better to leave this out, and instead start the resulting script as
           superuser.)

       -t pattern
       --table=pattern
           Dump only tables with names matching pattern. For this purpose,
           “table” includes views, materialized views, sequences, and foreign
           tables. Multiple tables can be selected by writing multiple -t
           switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple tables can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see EXAMPLES.

           The -n and -N switches have no effect when -t is used, because
           tables selected by -t will be dumped regardless of those switches,
           and non-table objects will not be dumped.

               Note
               When -t is specified, pg_dump makes no attempt to dump any
               other database objects that the selected table(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-table dump can be successfully restored by themselves
               into a clean database.

               Note
               The behavior of the -t switch is not entirely upward compatible
               with pre-8.2 PostgreSQL versions. Formerly, writing -t tab
               would dump all tables named tab, but now it just dumps
               whichever one is visible in your default search path. To get
               the old behavior you can write -t '*.tab'. Also, you must write
               something like -t sch.tab to select a table in a particular
               schema, rather than the old locution of -n sch -t tab.

       -T pattern
       --exclude-table=pattern
           Do not dump any tables matching pattern. The pattern is interpreted
           according to the same rules as for -t.  -T can be given more than
           once to exclude tables matching any of several patterns.

           When both -t and -T are given, the behavior is to dump just the
           tables that match at least one -t switch but no -T switches. If -T
           appears without -t, then tables matching -T are excluded from what
           is otherwise a normal dump.

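           For example, to dump all tables matching mytable* in the schema
           myschema while excluding myschema.mytable_log (all names here are
           placeholders):

               $ pg_dump -t 'myschema.mytable*' -T myschema.mytable_log mydb > db.sql
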
       -v
       --verbose
           Specifies verbose mode. This will cause pg_dump to output detailed
           object comments and start/stop times to the dump file, and progress
           messages to standard error.

       -V
       --version
           Print the pg_dump version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       -Z 0..9
       --compress=0..9
           Specify the compression level to use. Zero means no compression.
           For the custom and directory archive formats, this specifies
           compression of individual table-data segments, and the default is
           to compress at a moderate level. For plain text output, setting a
           nonzero compression level causes the entire output file to be
           compressed, as though it had been fed through gzip; but the default
           is not to compress. The tar archive format currently does not
           support compression at all.

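           For example, a compressed plain-text dump of a hypothetical
           database mydb, equivalent to piping the script through gzip:

               $ pg_dump -Z 6 mydb > db.sql.gz
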
       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of the
           option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be loaded
           into non-PostgreSQL databases. Any error during reloading will
           cause only rows that are part of the problematic INSERT to be lost,
           rather than the entire table contents.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function bodies,
           and forces them to be quoted using SQL standard string syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dump to include commands to temporarily disable
           triggers on the target tables while the data is reloaded. Use this
           if you have referential integrity checks or other triggers on the
           tables that you do not want to invoke during data reload.

           Presently, the commands emitted for --disable-triggers must be done
           as superuser. So, you should also specify a superuser name with -S,
           or preferably be careful to start the resulting script as a
           superuser.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       --enable-row-security
           This option is relevant only when dumping the contents of a table
           which has row security. By default, pg_dump will set row_security
           to off, to ensure that all data is dumped from the table. If the
           user does not have sufficient privileges to bypass row security,
           then an error is thrown. This parameter instructs pg_dump to set
           row_security to on instead, allowing the user to dump the parts of
           the contents of the table that they have access to.

           Note that if you use this option currently, you probably also want
           the dump to be in INSERT format, as the COPY FROM during restore
           does not support row security.

       --exclude-table-data=pattern
           Do not dump data for any tables matching pattern. The pattern is
           interpreted according to the same rules as for -t.
           --exclude-table-data can be given more than once to exclude tables
           matching any of several patterns. This option is useful when you
           need the definition of a particular table even though you do not
           need the data in it.

           To exclude data for all tables in the database, see --schema-only.

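           For example, to dump a hypothetical database mydb with the
           definition of the table log_entries but none of its rows:

               $ pg_dump --exclude-table-data=log_entries mydb > db.sql
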
       --extra-float-digits=ndigits
           Use the specified value of extra_float_digits when dumping
           floating-point data, instead of the maximum available precision.
           Routine dumps made for backup purposes should not use this option.

       --if-exists
           Use conditional commands (i.e., add an IF EXISTS clause) when
           cleaning database objects. This option is not valid unless --clean
           is also specified.

       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. Any error during
           reloading will cause only rows that are part of the problematic
           INSERT to be lost, rather than the entire table contents. Note that
           the restore might fail altogether if you have rearranged column
           order. The --column-inserts option is safe against column order
           changes, though even slower.

       --load-via-partition-root
           When dumping data for a table partition, make the COPY or INSERT
           statements target the root of the partitioning hierarchy that
           contains it, rather than the partition itself. This causes the
           appropriate partition to be re-determined for each row when the
           data is loaded. This may be useful when reloading data on a server
           where rows do not always fall into the same partitions as they did
           on the original server. That could happen, for example, if the
           partitioning column is of type text and the two systems have
           different definitions of the collation used to sort the
           partitioning column.

           It is best not to use parallelism when restoring from an archive
           made with this option, because pg_restore will not know exactly
           which partition(s) a given archive data item will load data into.
           This could result in inefficiency due to lock conflicts between
           parallel jobs, or perhaps even reload failures due to foreign key
           constraints being set up before all the relevant data is loaded.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the beginning
           of the dump. Instead fail if unable to lock a table within the
           specified timeout. The timeout may be specified in any of the
           formats accepted by SET statement_timeout. (Allowed formats vary
           depending on the server version you are dumping from, but an
           integer number of milliseconds is accepted by all versions.)

       --no-comments
           Do not dump comments.

       --no-publications
           Do not dump publications.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dump will wait for all files to be written safely to
           disk. This option causes pg_dump to return without waiting, which
           is faster, but means that a subsequent operating system crash can
           leave the dump corrupt. Generally, this option is useful for
           testing but should not be used when dumping data from a production
           installation.

       --no-synchronized-snapshots
           This option allows running pg_dump -j against a pre-9.2 server;
           see the documentation of the -j parameter for more details.

       --no-tablespaces
           Do not output commands to select tablespaces. With this option, all
           objects will be created in whichever tablespace is the default
           during restore.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables. This option has no
           effect on whether or not the table definitions (schema) are dumped;
           it only suppresses dumping the table data. Data in unlogged tables
           is always excluded when dumping from a standby server.

       --on-conflict-do-nothing
           Add ON CONFLICT DO NOTHING to INSERT commands. This option is not
           valid unless --inserts, --column-inserts or --rows-per-insert is
           also specified.

       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version is
           different from pg_dump's, or when the output is intended to be
           loaded into a server of a different major version. By default,
           pg_dump quotes only identifiers that are reserved words in its own
           major version. This sometimes results in compatibility issues when
           dealing with servers of other versions that may have slightly
           different sets of reserved words. Using --quote-all-identifiers
           prevents such issues, at the price of a harder-to-read dump script.

       --rows-per-insert=nrows
           Dump data as INSERT commands (rather than COPY). Controls the
           maximum number of rows per INSERT command. The value specified must
           be a number greater than zero. Any error during reloading will
           cause only rows that are part of the problematic INSERT to be lost,
           rather than the entire table contents.

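           For example, to emit multi-row INSERT commands of at most 1000
           rows each for a hypothetical database mydb:

               $ pg_dump --rows-per-insert=1000 mydb > inserts.sql
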
       --section=sectionname
           Only dump the named section. The section name can be pre-data,
           data, or post-data. This option can be specified more than once to
           select multiple sections. The default is to dump all sections.

           The data section contains actual table data, large-object contents,
           and sequence values. Post-data items include definitions of
           indexes, triggers, rules, and constraints other than validated
           check constraints. Pre-data items include all other data definition
           items.

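           For example, a restore can be staged by dumping each section of a
           hypothetical database mydb to its own script: table definitions
           first, then the data, then indexes and constraints:

               $ pg_dump --section=pre-data mydb > pre.sql
               $ pg_dump --section=data mydb > data.sql
               $ pg_dump --section=post-data mydb > post.sql
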
       --serializable-deferrable
           Use a serializable transaction for the dump, to ensure that the
           snapshot used is consistent with later database states; but do this
           by waiting for a point in the transaction stream at which no
           anomalies can be present, so that there isn't a risk of the dump
           failing or causing other transactions to roll back with a
           serialization_failure. See Chapter 13 for more information about
           transaction isolation and concurrency control.

           This option is not beneficial for a dump which is intended only for
           disaster recovery. It could be useful for a dump used to load a
           copy of the database for reporting or other read-only load sharing
           while the original database continues to be updated. Without it the
           dump may reflect a state which is not consistent with any serial
           execution of the transactions eventually committed. For example, if
           batch processing techniques are used, a batch may show as closed in
           the dump without all of the items which are in the batch appearing.

           This option will make no difference if there are no read-write
           transactions active when pg_dump is started. If read-write
           transactions are active, the start of the dump may be delayed for
           an indeterminate length of time. Once running, performance with or
           without the switch is the same.

       --snapshot=snapshotname
           Use the specified synchronized snapshot when making a dump of the
           database (see Table 9.87 for more details).

           This option is useful when needing to synchronize the dump with a
           logical replication slot (see Chapter 48) or with a concurrent
           session.

           In the case of a parallel dump, the snapshot name defined by this
           option is used rather than taking a new snapshot.

       --strict-names
           Require that each schema (-n/--schema) and table (-t/--table)
           qualifier match at least one schema/table in the database to be
           dumped. Note that if none of the schema/table qualifiers find
           matches, pg_dump will generate an error even without
           --strict-names.

           This option has no effect on -N/--exclude-schema,
           -T/--exclude-table, or --exclude-table-data. An exclude pattern
           failing to match any objects is not considered an error.

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes the
           dump more standards-compatible, but depending on the history of the
           objects in the dump, might not restore properly. Also, a dump using
           SET SESSION AUTHORIZATION will certainly require superuser
           privileges to restore correctly, whereas ALTER OWNER requires
           lesser privileges.

       -?
       --help
           Show help about pg_dump command line arguments, and exit.

599       The following command-line options control the database connection
600       parameters.
601
602       -d dbname
603       --dbname=dbname
604           Specifies the name of the database to connect to. This is
605           equivalent to specifying dbname as the first non-option argument on
606           the command line. The dbname can be a connection string. If so,
607           connection string parameters will override any conflicting command
608           line options.
609
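          For example, a connection string can carry the host, user, and
          SSL settings in a single argument (the host name shown is
          illustrative):

              $ pg_dump -d "host=db.example.com port=5432 dbname=mydb user=alice sslmode=require" > db.sql
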
610       -h host
611       --host=host
612           Specifies the host name of the machine on which the server is
613           running. If the value begins with a slash, it is used as the
614           directory for the Unix domain socket. The default is taken from the
615           PGHOST environment variable, if set, else a Unix domain socket
616           connection is attempted.
617
618       -p port
619       --port=port
620           Specifies the TCP port or local Unix domain socket file extension
621           on which the server is listening for connections. Defaults to the
622           PGPORT environment variable, if set, or a compiled-in default.
623
624       -U username
625       --username=username
626           User name to connect as.
627
628       -w
629       --no-password
630           Never issue a password prompt. If the server requires password
631           authentication and a password is not available by other means such
632           as a .pgpass file, the connection attempt will fail. This option
633           can be useful in batch jobs and scripts where no user is present to
634           enter a password.
635
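          For example, an unattended nightly dump can supply the password
          through a ~/.pgpass file instead of a prompt (a sketch; the
          host and user names are illustrative, and the file must not be
          world-readable):

              # line in ~/.pgpass (mode 0600):
              #   hostname:port:database:username:password
              $ pg_dump -w -h db.example.com -U alice mydb > nightly.sql
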
636       -W
637       --password
638           Force pg_dump to prompt for a password before connecting to a
639           database.
640
641           This option is never essential, since pg_dump will automatically
642           prompt for a password if the server demands password
643           authentication. However, pg_dump will waste a connection attempt
644           finding out that the server wants a password. In some cases it is
645           worth typing -W to avoid the extra connection attempt.
646
647       --role=rolename
648           Specifies a role name to be used to create the dump. This option
649           causes pg_dump to issue a SET ROLE rolename command after
650           connecting to the database. It is useful when the authenticated
651           user (specified by -U) lacks privileges needed by pg_dump, but can
652           switch to a role with the required rights. Some installations have
653           a policy against logging in directly as a superuser, and use of
654           this option allows dumps to be made without violating the policy.
655
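          For example, if the login role can SET ROLE to a role with the
          needed privileges (both role names below are hypothetical):

              $ pg_dump -U alice --role=dump_role mydb > db.sql
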

ENVIRONMENT

657       PGDATABASE
658       PGHOST
659       PGOPTIONS
660       PGPORT
661       PGUSER
662           Default connection parameters.
663
664       PG_COLOR
665           Specifies whether to use color in diagnostic messages. Possible
666           values are always, auto and never.
667
668       This utility, like most other PostgreSQL utilities, also uses the
669       environment variables supported by libpq (see Section 33.14).
670
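       For example, connection defaults can be supplied through the
       environment instead of command-line switches (the host name shown
       is illustrative):

           $ PGHOST=db.example.com PGPORT=5433 PGUSER=alice pg_dump mydb > db.sql
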

DIAGNOSTICS

672       pg_dump internally executes SELECT statements. If you have problems
673       running pg_dump, make sure you are able to select information from the
674       database using, for example, psql(1). Also, any default connection
675       settings and environment variables used by the libpq front-end library
676       will apply.
677
678       The database activity of pg_dump is normally collected by the
679       statistics collector. If this is undesirable, you can set parameter
680       track_counts to false via PGOPTIONS or the ALTER USER command.
681
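       For example, either of the following keeps the dump's activity out
       of the statistics collector (a sketch; the role name is
       illustrative, and changing track_counts requires sufficient
       privileges):

           $ PGOPTIONS='-c track_counts=off' pg_dump mydb > db.sql

           ALTER USER dump_user SET track_counts = false;
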

NOTES

683       If your database cluster has any local additions to the template1
684       database, be careful to restore the output of pg_dump into a truly
685       empty database; otherwise you are likely to get errors due to duplicate
686       definitions of the added objects. To make an empty database without any
687       local additions, copy from template0 not template1, for example:
688
689           CREATE DATABASE foo WITH TEMPLATE template0;
690
691       When a data-only dump is chosen and the option --disable-triggers is
692       used, pg_dump emits commands to disable triggers on user tables before
693       inserting the data, and then commands to re-enable them after the data
694       has been inserted. If the restore is stopped in the middle, the system
695       catalogs might be left in the wrong state.
696
697       The dump file produced by pg_dump does not contain the statistics used
698       by the optimizer to make query planning decisions. Therefore, it is
699       wise to run ANALYZE after restoring from a dump file to ensure optimal
700       performance; see Section 24.1.3 and Section 24.1.6 for more
701       information.
702
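       For example, after restoring into newdb, planner statistics can be
       rebuilt with either of:

           $ psql -d newdb -c 'ANALYZE'

           $ vacuumdb --analyze-only newdb
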
703       Because pg_dump is used to transfer data to newer versions of
704       PostgreSQL, the output of pg_dump can be expected to load into
705       PostgreSQL server versions newer than pg_dump's version.  pg_dump can
706       also dump from PostgreSQL servers older than its own version.
707       (Currently, servers back to version 8.0 are supported.) However,
708       pg_dump cannot dump from PostgreSQL servers newer than its own major
709       version; it will refuse to even try, rather than risk making an invalid
710       dump. Also, it is not guaranteed that pg_dump's output can be loaded
711       into a server of an older major version — not even if the dump was
712       taken from a server of that version. Loading a dump file into an older
713       server may require manual editing of the dump file to remove syntax not
714       understood by the older server. Use of the --quote-all-identifiers
715       option is recommended in cross-version cases, as it can prevent
716       problems arising from varying reserved-word lists in different
717       PostgreSQL versions.
718
719       When dumping logical replication subscriptions, pg_dump will generate
720       CREATE SUBSCRIPTION commands that use the connect = false option, so
721       that restoring the subscription does not make remote connections for
722       creating a replication slot or for initial table copy. That way, the
723       dump can be restored without requiring network access to the remote
724       servers. It is then up to the user to reactivate the subscriptions in a
725       suitable way. If the involved hosts have changed, the connection
726       information might have to be changed. It might also be appropriate to
727       truncate the target tables before initiating a new full table copy.
728
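       For example, once the connection information has been verified,
       one possible sequence for reactivating a restored subscription is
       shown below (a sketch; the subscription name mysub and host are
       hypothetical, and the appropriate handling of the remote
       replication slot depends on the situation):

           ALTER SUBSCRIPTION mysub CONNECTION 'host=newhost dbname=mydb';
           ALTER SUBSCRIPTION mysub ENABLE;
           ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
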

EXAMPLES

730       To dump a database called mydb into a SQL-script file:
731
732           $ pg_dump mydb > db.sql
733
734       To reload such a script into a (freshly created) database named newdb:
735
736           $ psql -d newdb -f db.sql
737
738       To dump a database into a custom-format archive file:
739
740           $ pg_dump -Fc mydb > db.dump
741
742       To dump a database into a directory-format archive:
743
744           $ pg_dump -Fd mydb -f dumpdir
745
746       To dump a database into a directory-format archive in parallel with 5
747       worker jobs:
748
749           $ pg_dump -Fd mydb -j 5 -f dumpdir
750
751       To reload an archive file into a (freshly created) database named
752       newdb:
753
754           $ pg_restore -d newdb db.dump
755
756       To reload an archive file into the same database it was dumped from,
757       discarding the current contents of that database:
758
759           $ pg_restore -d postgres --clean --create db.dump
760
761       To dump a single table named mytab:
762
763           $ pg_dump -t mytab mydb > db.sql
764
765       To dump all tables whose names start with emp in the detroit schema,
766       except for the table named employee_log:
767
768           $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql
769
770       To dump all schemas whose names start with east or west and end in gsm,
771       excluding any schemas whose names contain the word test:
772
773           $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
774
775       The same, using regular expression notation to consolidate the
776       switches:
777
778           $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
779
780       To dump all database objects except for tables whose names begin with
781       ts_:
782
783           $ pg_dump -T 'ts_*' mydb > db.sql
784
785       To specify an upper-case or mixed-case name in -t and related switches,
786       you need to double-quote the name; else it will be folded to lower case
787       (see Patterns). But double quotes are special to the shell, so in turn
788       they must be quoted. Thus, to dump a single table with a mixed-case
789       name, you need something like
790
791           $ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql
792

SEE ALSO

794       pg_dumpall(1), pg_restore(1), psql(1)
795
796
797
798PostgreSQL 12.6                      2021                           PG_DUMP(1)