PG_DUMP(1)               PostgreSQL 14.3 Documentation              PG_DUMP(1)

NAME

       pg_dump - extract a PostgreSQL database into a script file or other
       archive file

SYNOPSIS

       pg_dump [connection-option...] [option...] [dbname]

DESCRIPTION

       pg_dump is a utility for backing up a PostgreSQL database. It makes
       consistent backups even if the database is being used concurrently.
       pg_dump does not block other users accessing the database (readers or
       writers).

       pg_dump only dumps a single database. To back up an entire cluster, or
       to back up global objects that are common to all databases in a cluster
       (such as roles and tablespaces), use pg_dumpall(1).

       Dumps can be output in script or archive file formats. Script dumps are
       plain-text files containing the SQL commands required to reconstruct
       the database to the state it was in at the time it was saved. To
       restore from such a script, feed it to psql(1). Script files can be
       used to reconstruct the database even on other machines and other
       architectures; with some modifications, even on other SQL database
       products.
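
       For example, a script dump and restore might look like this (the
       database names mydb and newdb are placeholders, not part of this
       manual):

```shell
# Dump the placeholder database "mydb" to a plain-text SQL script,
# then rebuild it in a fresh database.
pg_dump mydb > mydb.sql      # write the script to a file
createdb newdb               # create an empty target database
psql -d newdb -f mydb.sql    # replay the script against the target
```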

       The alternative archive file formats must be used with pg_restore(1) to
       rebuild the database. They allow pg_restore to be selective about what
       is restored, or even to reorder the items prior to being restored. The
       archive file formats are designed to be portable across architectures.

       When used with one of the archive file formats and combined with
       pg_restore, pg_dump provides a flexible archival and transfer
       mechanism.  pg_dump can be used to back up an entire database, then
       pg_restore can be used to examine the archive and/or select which parts
       of the database are to be restored. The most flexible output file
       formats are the “custom” format (-Fc) and the “directory” format (-Fd).
       They allow for selection and reordering of all archived items, support
       parallel restoration, and are compressed by default. The “directory”
       format is the only format that supports parallel dumps.
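
       A selective restore from a custom-format archive might be sketched as
       follows ("mydb" and "newdb" are placeholder names):

```shell
# Dump to a custom-format archive (compressed by default), then use
# pg_restore's item list to choose what gets restored.
pg_dump -Fc mydb > mydb.dump              # create the archive
pg_restore -l mydb.dump > archive.list    # list the archive's items
# Edit archive.list to comment out unwanted items, then:
pg_restore -d newdb -L archive.list mydb.dump
```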

       While running pg_dump, one should examine the output for any warnings
       (printed on standard error), especially in light of the limitations
       listed below.

OPTIONS

       The following command-line options control the content and format of
       the output.

       dbname
           Specifies the name of the database to be dumped. If this is not
           specified, the environment variable PGDATABASE is used. If that is
           not set, the user name specified for the connection is used.

       -a
       --data-only
           Dump only the data, not the schema (data definitions). Table data,
           large objects, and sequence values are dumped.

           This option is similar to, but for historical reasons not identical
           to, specifying --section=data.

       -b
       --blobs
           Include large objects in the dump. This is the default behavior
           except when --schema, --table, or --schema-only is specified. The
           -b switch is therefore only useful to add large objects to dumps
           where a specific schema or table has been requested. Note that
           blobs are considered data and therefore will be included when
           --data-only is used, but not when --schema-only is.

       -B
       --no-blobs
           Exclude large objects in the dump.

           When both -b and -B are given, the behavior is to output large
           objects when data is being dumped; see the -b documentation.

       -c
       --clean
           Output commands to clean (drop) database objects prior to
           outputting the commands for creating them. (Unless --if-exists is
           also specified, restore might generate some harmless error
           messages, if any objects were not present in the destination
           database.)

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       -C
       --create
           Begin the output with a command to create the database itself and
           reconnect to the created database. (With a script of this form, it
           doesn't matter which database in the destination installation you
           connect to before running the script.) If --clean is also
           specified, the script drops and recreates the target database
           before reconnecting to it.

           With --create, the output also includes the database's comment if
           any, and any configuration variable settings that are specific to
           this database, that is, any ALTER DATABASE ... SET ...  and ALTER
           ROLE ... IN DATABASE ... SET ...  commands that mention this
           database. Access privileges for the database itself are also
           dumped, unless --no-acl is specified.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       -e pattern
       --extension=pattern
           Dump only extensions matching pattern. When this option is not
           specified, all non-system extensions in the target database will be
           dumped. Multiple extensions can be selected by writing multiple -e
           switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple extensions can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards.

           Any configuration relation registered by pg_extension_config_dump
           is included in the dump if its extension is specified by
           --extension.

               Note
               When -e is specified, pg_dump makes no attempt to dump any
               other database objects that the selected extension(s) might
               depend upon. Therefore, there is no guarantee that the results
               of a specific-extension dump can be successfully restored by
               themselves into a clean database.
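
       A hypothetical invocation using a quoted wildcard pattern (the
       extension and database names are placeholders):

```shell
# Dump only extensions whose names start with "hstore"; the quotes keep
# the shell from expanding the asterisk before pg_dump sees it.
pg_dump -e 'hstore*' mydb > extensions.sql
```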

       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another way
           to get the same result is to set the PGCLIENTENCODING environment
           variable to the desired dump encoding.) The supported encodings are
           described in Section 24.3.1.

       -f file
       --file=file
           Send output to the specified file. This parameter can be omitted
           for file-based output formats, in which case the standard output is
           used. However, it must be given for the directory output format,
           where it specifies the target directory instead of a file. In this
           case the directory is created by pg_dump and must not exist
           beforehand.

       -F format
       --format=format
           Selects the format of the output.  format can be one of the
           following:

           p
           plain
               Output a plain-text SQL script file (the default).

           c
           custom
               Output a custom-format archive suitable for input into
               pg_restore. Together with the directory output format, this is
               the most flexible output format in that it allows manual
               selection and reordering of archived items during restore. This
               format is also compressed by default.

           d
           directory
               Output a directory-format archive suitable for input into
               pg_restore. This will create a directory with one file for each
               table and blob being dumped, plus a so-called Table of Contents
               file describing the dumped objects in a machine-readable format
               that pg_restore can read. A directory format archive can be
               manipulated with standard Unix tools; for example, files in an
               uncompressed archive can be compressed with the gzip tool. This
               format is compressed by default and also supports parallel
               dumps.

           t
           tar
               Output a tar-format archive suitable for input into pg_restore.
               The tar format is compatible with the directory format:
               extracting a tar-format archive produces a valid
               directory-format archive. However, the tar format does not
               support compression. Also, when using tar format the relative
               order of table data items cannot be changed during restore.
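
       The four formats side by side, using the placeholder database name
       "mydb":

```shell
# One invocation per output format.
pg_dump -Fp mydb > mydb.sql     # plain-text script (the default)
pg_dump -Fc mydb > mydb.dump    # custom archive, compressed by default
pg_dump -Fd mydb -f mydb.dir    # directory archive; -f is required here
pg_dump -Ft mydb > mydb.tar     # tar archive; no compression support
```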

       -j njobs
       --jobs=njobs
           Run the dump in parallel by dumping njobs tables simultaneously.
           This option may reduce the time needed to perform the dump but it
           also increases the load on the database server. You can only use
           this option with the directory output format because this is the
           only output format where multiple processes can write their data at
           the same time.

           pg_dump will open njobs + 1 connections to the database, so make
           sure your max_connections setting is high enough to accommodate all
           connections.

           Requesting exclusive locks on database objects while running a
           parallel dump could cause the dump to fail. The reason is that the
           pg_dump leader process requests shared locks on the objects that
           the worker processes are going to dump later in order to make sure
           that nobody deletes them and makes them go away while the dump is
           running. If another client then requests an exclusive lock on a
           table, that lock will not be granted but will be queued waiting for
           the shared lock of the leader process to be released. Consequently
           any other access to the table will not be granted either and will
           queue after the exclusive lock request. This includes the worker
           process trying to dump the table. Without any precautions this
           would be a classic deadlock situation. To detect this conflict, the
           pg_dump worker process requests another shared lock using the
           NOWAIT option. If the worker process is not granted this shared
           lock, somebody else must have requested an exclusive lock in the
           meantime and there is no way to continue with the dump, so pg_dump
           has no choice but to abort the dump.

           For a consistent backup, the database server needs to support
           synchronized snapshots, a feature that was introduced in PostgreSQL
           9.2 for primary servers and 10 for standbys. With this feature,
           database clients can ensure they see the same data set even though
           they use different connections.  pg_dump -j uses multiple database
           connections; it connects to the database once with the leader
           process and once again for each worker job. Without the
           synchronized snapshot feature, the different worker jobs wouldn't
           be guaranteed to see the same data in each connection, which could
           lead to an inconsistent backup.

           If you want to run a parallel dump of a pre-9.2 server, you need to
           make sure that the database content doesn't change between the
           time the leader connects to the database and the time the last
           worker job has connected to the database. The easiest way to do
           this is to halt any data modifying processes (DDL and DML)
           accessing the database before starting the backup. You also need
           to specify the --no-synchronized-snapshots parameter when running
           pg_dump -j against a pre-9.2 PostgreSQL server.
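
       For instance, a parallel dump and a matching parallel restore might
       look like this ("mydb" and "newdb" are placeholder names):

```shell
# Parallel dump with 4 workers; pg_dump opens njobs + 1 = 5 connections,
# so max_connections must allow at least that many.
pg_dump -Fd -j 4 -f mydb.dir mydb
# The same directory archive can later be restored in parallel:
pg_restore -d newdb -j 4 mydb.dir
```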

       -n pattern
       --schema=pattern
           Dump only schemas matching pattern; this selects both the schema
           itself, and all its contained objects. When this option is not
           specified, all non-system schemas in the target database will be
           dumped. Multiple schemas can be selected by writing multiple -n
           switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns below), so multiple schemas can also be selected by
           writing wildcard characters in the pattern. When using wildcards,
           be careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see Examples below.

               Note
               When -n is specified, pg_dump makes no attempt to dump any
               other database objects that the selected schema(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-schema dump can be successfully restored by themselves
               into a clean database.

               Note
               Non-schema objects such as blobs are not dumped when -n is
               specified. You can add blobs back to the dump with the --blobs
               switch.

       -N pattern
       --exclude-schema=pattern
           Do not dump any schemas matching pattern. The pattern is
           interpreted according to the same rules as for -n.  -N can be given
           more than once to exclude schemas matching any of several patterns.

           When both -n and -N are given, the behavior is to dump just the
           schemas that match at least one -n switch but no -N switches. If -N
           appears without -n, then schemas matching -N are excluded from what
           is otherwise a normal dump.
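
       Combining the two switches might look like this (the schema and
       database names are hypothetical):

```shell
# Dump every schema beginning with "app_", except "app_archive".
# Quoting the wildcard pattern keeps the shell from expanding it.
pg_dump -n 'app_*' -N app_archive mydb > app_schemas.sql
```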

       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dump issues ALTER OWNER or SET
           SESSION AUTHORIZATION statements to set ownership of created
           database objects. These statements will fail when the script is run
           unless it is started by a superuser (or the same user that owns all
           of the objects in the script). To make a script that can be
           restored by any user, but will give that user ownership of all the
           objects, specify -O.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       -R
       --no-reconnect
           This option is obsolete but still accepted for backwards
           compatibility.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

           This option is the inverse of --data-only. It is similar to, but
           for historical reasons not identical to, specifying
           --section=pre-data --section=post-data.

           (Do not confuse this with the --schema option, which uses the word
           “schema” in a different meaning.)

           To exclude table data for only a subset of tables in the database,
           see --exclude-table-data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually, it's
           better to leave this out, and instead start the resulting script as
           superuser.)

       -t pattern
       --table=pattern
           Dump only tables with names matching pattern. Multiple tables can
           be selected by writing multiple -t switches. The pattern parameter
           is interpreted as a pattern according to the same rules used by
           psql's \d commands (see Patterns below), so multiple tables can
           also be selected by writing wildcard characters in the pattern.
           When using wildcards, be careful to quote the pattern if needed to
           prevent the shell from expanding the wildcards; see Examples below.

           As well as tables, this option can be used to dump the definition
           of matching views, materialized views, foreign tables, and
           sequences. It will not dump the contents of views or materialized
           views, and the contents of foreign tables will only be dumped if
           the corresponding foreign server is specified with
           --include-foreign-data.

           The -n and -N switches have no effect when -t is used, because
           tables selected by -t will be dumped regardless of those switches,
           and non-table objects will not be dumped.

               Note
               When -t is specified, pg_dump makes no attempt to dump any
               other database objects that the selected table(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-table dump can be successfully restored by themselves
               into a clean database.

       -T pattern
       --exclude-table=pattern
           Do not dump any tables matching pattern. The pattern is interpreted
           according to the same rules as for -t.  -T can be given more than
           once to exclude tables matching any of several patterns.

           When both -t and -T are given, the behavior is to dump just the
           tables that match at least one -t switch but no -T switches. If -T
           appears without -t, then tables matching -T are excluded from what
           is otherwise a normal dump.
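
       For example, to dump a family of tables while skipping one of them
       (all names here are placeholders):

```shell
# Dump all tables starting with "orders", but skip "orders_staging".
pg_dump -t 'orders*' -T orders_staging mydb > orders.sql
```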

       -v
       --verbose
           Specifies verbose mode. This will cause pg_dump to output detailed
           object comments and start/stop times to the dump file, and progress
           messages to standard error. Repeating the option causes additional
           debug-level messages to appear on standard error.

       -V
       --version
           Print the pg_dump version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       -Z 0..9
       --compress=0..9
           Specify the compression level to use. Zero means no compression.
           For the custom and directory archive formats, this specifies
           compression of individual table-data segments, and the default is
           to compress at a moderate level. For plain text output, setting a
           nonzero compression level causes the entire output file to be
           compressed, as though it had been fed through gzip; but the default
           is not to compress. The tar archive format currently does not
           support compression at all.
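
       A compressed plain-text dump can be handled like any gzip file
       ("mydb" and "newdb" are placeholder names):

```shell
# Maximum compression of a plain-text dump; the output is gzip-compatible.
pg_dump -Z 9 mydb > mydb.sql.gz
# Restore by decompressing on the fly:
gunzip -c mydb.sql.gz | psql -d newdb
```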

       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of the
           option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be loaded
           into non-PostgreSQL databases. Any error during reloading will
           cause only rows that are part of the problematic INSERT to be lost,
           rather than the entire table contents.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function bodies,
           and forces them to be quoted using SQL standard string syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dump to include commands to temporarily disable
           triggers on the target tables while the data is reloaded. Use this
           if you have referential integrity checks or other triggers on the
           tables that you do not want to invoke during data reload.

           Presently, the commands emitted for --disable-triggers must be done
           as superuser. So, you should also specify a superuser name with -S,
           or preferably be careful to start the resulting script as a
           superuser.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       --enable-row-security
           This option is relevant only when dumping the contents of a table
           which has row security. By default, pg_dump will set row_security
           to off, to ensure that all data is dumped from the table. If the
           user does not have sufficient privileges to bypass row security,
           then an error is thrown. This parameter instructs pg_dump to set
           row_security to on instead, allowing the user to dump the parts of
           the contents of the table that they have access to.

           Note that if you use this option currently, you probably also want
           the dump to be in INSERT format, as COPY FROM during restore does
           not support row security.

       --exclude-table-data=pattern
           Do not dump data for any tables matching pattern. The pattern is
           interpreted according to the same rules as for -t.
           --exclude-table-data can be given more than once to exclude tables
           matching any of several patterns. This option is useful when you
           need the definition of a particular table even though you do not
           need the data in it.

           To exclude data for all tables in the database, see --schema-only.

       --extra-float-digits=ndigits
           Use the specified value of extra_float_digits when dumping
           floating-point data, instead of the maximum available precision.
           Routine dumps made for backup purposes should not use this option.

       --if-exists
           Use conditional commands (i.e., add an IF EXISTS clause) when
           cleaning database objects. This option is not valid unless --clean
           is also specified.
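
       For example ("mydb" is a placeholder name):

```shell
# Emit DROP ... IF EXISTS commands before each CREATE, so restoring into
# a database that lacks some of the objects produces no error messages.
pg_dump --clean --if-exists mydb > mydb.sql
```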

       --include-foreign-data=foreignserver
           Dump the data for any foreign table with a foreign server matching
           foreignserver pattern. Multiple foreign servers can be selected by
           writing multiple --include-foreign-data switches. Also, the
           foreignserver parameter is interpreted as a pattern according to
           the same rules used by psql's \d commands (see Patterns below), so
           multiple foreign servers can also be selected by writing wildcard
           characters in the pattern. When using wildcards, be careful to
           quote the pattern if needed to prevent the shell from expanding the
           wildcards; see Examples below. The only exception is that an empty
           pattern is disallowed.

               Note
               When --include-foreign-data is specified, pg_dump does not
               check that the foreign table is writable. Therefore, there is
               no guarantee that the results of a foreign table dump can be
               successfully restored.

       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. Any error during
           reloading will cause only rows that are part of the problematic
           INSERT to be lost, rather than the entire table contents. Note that
           the restore might fail altogether if you have rearranged column
           order. The --column-inserts option is safe against column order
           changes, though even slower.

       --load-via-partition-root
           When dumping data for a table partition, make the COPY or INSERT
           statements target the root of the partitioning hierarchy that
           contains it, rather than the partition itself. This causes the
           appropriate partition to be re-determined for each row when the
           data is loaded. This may be useful when reloading data on a server
           where rows do not always fall into the same partitions as they did
           on the original server. That could happen, for example, if the
           partitioning column is of type text and the two systems have
           different definitions of the collation used to sort the
           partitioning column.

           It is best not to use parallelism when restoring from an archive
           made with this option, because pg_restore will not know exactly
           which partition(s) a given archive data item will load data into.
           This could result in inefficiency due to lock conflicts between
           parallel jobs, or perhaps even reload failures due to foreign key
           constraints being set up before all the relevant data is loaded.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the beginning
           of the dump. Instead fail if unable to lock a table within the
           specified timeout. The timeout may be specified in any of the
           formats accepted by SET statement_timeout. (Allowed formats vary
           depending on the server version you are dumping from, but an
           integer number of milliseconds is accepted by all versions.)

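
       For example ("mydb" is a placeholder name):

```shell
# Abort the dump if any table lock cannot be acquired within 10 seconds;
# a bare integer is interpreted as milliseconds on all server versions.
pg_dump --lock-wait-timeout=10000 mydb > mydb.sql
```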
       --no-comments
           Do not dump comments.

       --no-publications
           Do not dump publications.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dump will wait for all files to be written safely to
           disk. This option causes pg_dump to return without waiting, which
           is faster, but means that a subsequent operating system crash can
           leave the dump corrupt. Generally, this option is useful for
           testing but should not be used when dumping data from a production
           installation.

       --no-synchronized-snapshots
           This option allows running pg_dump -j against a pre-9.2 server;
           see the documentation of the -j parameter for more details.

       --no-tablespaces
           Do not output commands to select tablespaces. With this option, all
           objects will be created in whichever tablespace is the default
           during restore.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       --no-toast-compression
           Do not output commands to set TOAST compression methods. With this
           option, all columns will be restored with the default compression
           setting.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables. This option has no
           effect on whether or not the table definitions (schema) are dumped;
           it only suppresses dumping the table data. Data in unlogged tables
           is always excluded when dumping from a standby server.

       --on-conflict-do-nothing
           Add ON CONFLICT DO NOTHING to INSERT commands. This option is not
           valid unless --inserts, --column-inserts or --rows-per-insert is
           also specified.

       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version is
           different from pg_dump's, or when the output is intended to be
           loaded into a server of a different major version. By default,
           pg_dump quotes only identifiers that are reserved words in its own
           major version. This sometimes results in compatibility issues when
           dealing with servers of other versions that may have slightly
           different sets of reserved words. Using --quote-all-identifiers
           prevents such issues, at the price of a harder-to-read dump script.

       --rows-per-insert=nrows
           Dump data as INSERT commands (rather than COPY). Controls the
           maximum number of rows per INSERT command. The value specified must
           be a number greater than zero. Any error during reloading will
           cause only rows that are part of the problematic INSERT to be lost,
           rather than the entire table contents.

       --section=sectionname
           Only dump the named section. The section name can be pre-data,
           data, or post-data. This option can be specified more than once to
           select multiple sections. The default is to dump all sections.

           The data section contains actual table data, large-object contents,
           and sequence values. Post-data items include definitions of
           indexes, triggers, rules, and constraints other than validated
           check constraints. Pre-data items include all other data definition
           items.
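
       A dump split into its three sections might be sketched as follows;
       together the three files cover everything a default dump contains
       ("mydb" is a placeholder name):

```shell
# One file per section; restore them in pre-data, data, post-data order.
pg_dump --section=pre-data  mydb > pre.sql
pg_dump --section=data      mydb > data.sql
pg_dump --section=post-data mydb > post.sql
```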

       --serializable-deferrable
           Use a serializable transaction for the dump, to ensure that the
           snapshot used is consistent with later database states; but do this
           by waiting for a point in the transaction stream at which no
           anomalies can be present, so that there isn't a risk of the dump
           failing or causing other transactions to roll back with a
           serialization_failure. See Chapter 13 for more information about
           transaction isolation and concurrency control.

           This option is not beneficial for a dump which is intended only for
           disaster recovery. It could be useful for a dump used to load a
           copy of the database for reporting or other read-only load sharing
           while the original database continues to be updated. Without it the
           dump may reflect a state which is not consistent with any serial
           execution of the transactions eventually committed. For example, if
           batch processing techniques are used, a batch may show as closed in
           the dump without all of the items which are in the batch appearing.

           This option will make no difference if there are no read-write
           transactions active when pg_dump is started. If read-write
           transactions are active, the start of the dump may be delayed for
           an indeterminate length of time. Once running, performance with or
           without the switch is the same.

609       --snapshot=snapshotname
610           Use the specified synchronized snapshot when making a dump of the
611           database (see Table 9.90 for more details).
612
613           This option is useful when needing to synchronize the dump with a
614           logical replication slot (see Chapter 49) or with a concurrent
615           session.
616
617           In the case of a parallel dump, the snapshot name defined by this
618           option is used rather than taking a new snapshot.
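           For example, a concurrent session can export a snapshot by
           calling the pg_export_snapshot() function inside an open
           transaction, and pg_dump can then reuse it (the snapshot
           name shown is illustrative):

               mydb=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
               mydb=# SELECT pg_export_snapshot();

               $ pg_dump --snapshot=00000003-0000001B-1 mydb > db.sql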

       --strict-names
           Require that each extension (-e/--extension), schema (-n/--schema)
           and table (-t/--table) qualifier match at least one
           extension/schema/table in the database to be dumped. Note that if
           none of the extension/schema/table qualifiers find matches, pg_dump
           will generate an error even without --strict-names.

           This option has no effect on -N/--exclude-schema,
           -T/--exclude-table, or --exclude-table-data. An exclude pattern
           failing to match any objects is not considered an error.
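           For example, given two table patterns, the following fails
           with an error if either pattern matches nothing, instead of
           only when both do (patterns and database name are
           illustrative):

               $ pg_dump --strict-names -t 'emp*' -t 'pay*' mydb > db.sql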

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes the
           dump more standards-compatible, but depending on the history of the
           objects in the dump, might not restore properly. Also, a dump using
           SET SESSION AUTHORIZATION will certainly require superuser
           privileges to restore correctly, whereas ALTER OWNER requires
           lesser privileges.

       -?
       --help
           Show help about pg_dump command line arguments, and exit.

       The following command-line options control the database connection
       parameters.

       -d dbname
       --dbname=dbname
           Specifies the name of the database to connect to. This is
           equivalent to specifying dbname as the first non-option argument on
           the command line. The dbname can be a connection string. If so,
           connection string parameters will override any conflicting command
           line options.
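           For example, the following two invocations are equivalent;
           the second passes the same parameters as a connection string
           (host, user, and database names are illustrative):

               $ pg_dump -h db.example.com -U alice mydb > db.sql
               $ pg_dump -d 'host=db.example.com user=alice dbname=mydb' > db.sql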

       -h host
       --host=host
           Specifies the host name of the machine on which the server is
           running. If the value begins with a slash, it is used as the
           directory for the Unix domain socket. The default is taken from the
           PGHOST environment variable, if set, else a Unix domain socket
           connection is attempted.

       -p port
       --port=port
           Specifies the TCP port or local Unix domain socket file extension
           on which the server is listening for connections. Defaults to the
           PGPORT environment variable, if set, or a compiled-in default.

       -U username
       --username=username
           User name to connect as.

       -w
       --no-password
           Never issue a password prompt. If the server requires password
           authentication and a password is not available by other means such
           as a .pgpass file, the connection attempt will fail. This option
           can be useful in batch jobs and scripts where no user is present to
           enter a password.

       -W
       --password
           Force pg_dump to prompt for a password before connecting to a
           database.

           This option is never essential, since pg_dump will automatically
           prompt for a password if the server demands password
           authentication. However, pg_dump will waste a connection attempt
           finding out that the server wants a password. In some cases it is
           worth typing -W to avoid the extra connection attempt.

       --role=rolename
           Specifies a role name to be used to create the dump. This option
           causes pg_dump to issue a SET ROLE rolename command after
           connecting to the database. It is useful when the authenticated
           user (specified by -U) lacks privileges needed by pg_dump, but can
           switch to a role with the required rights. Some installations have
           a policy against logging in directly as a superuser, and use of
           this option allows dumps to be made without violating the policy.
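           For example, to connect as the user alice but run the dump
           with the privileges of the role dump_role (names are
           illustrative):

               $ pg_dump -U alice --role=dump_role mydb > db.sql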

ENVIRONMENT

       PGDATABASE
       PGHOST
       PGOPTIONS
       PGPORT
       PGUSER
           Default connection parameters.

       PG_COLOR
           Specifies whether to use color in diagnostic messages. Possible
           values are always, auto and never.

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see Section 34.15).

DIAGNOSTICS

       pg_dump internally executes SELECT statements. If you have problems
       running pg_dump, make sure you are able to select information from the
       database using, for example, psql(1). Also, any default connection
       settings and environment variables used by the libpq front-end library
       will apply.

       The database activity of pg_dump is normally collected by the
       statistics collector. If this is undesirable, you can set parameter
       track_counts to false via PGOPTIONS or the ALTER USER command.
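       For example, to turn off statistics collection for a single
       pg_dump run via PGOPTIONS (database name is illustrative):

           $ PGOPTIONS='-c track_counts=off' pg_dump mydb > db.sql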

NOTES

       If your database cluster has any local additions to the template1
       database, be careful to restore the output of pg_dump into a truly
       empty database; otherwise you are likely to get errors due to duplicate
       definitions of the added objects. To make an empty database without any
       local additions, copy from template0 not template1, for example:

           CREATE DATABASE foo WITH TEMPLATE template0;

       When a data-only dump is chosen and the option --disable-triggers is
       used, pg_dump emits commands to disable triggers on user tables before
       inserting the data, and then commands to re-enable them after the data
       has been inserted. If the restore is stopped in the middle, the system
       catalogs might be left in the wrong state.

       The dump file produced by pg_dump does not contain the statistics used
       by the optimizer to make query planning decisions. Therefore, it is
       wise to run ANALYZE after restoring from a dump file to ensure optimal
       performance; see Section 25.1.3 and Section 25.1.6 for more
       information.
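       For example, after restoring a dump into a database named newdb
       (name is illustrative), statistics can be regenerated with:

           $ psql -d newdb -c 'ANALYZE;'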

       Because pg_dump is used to transfer data to newer versions of
       PostgreSQL, the output of pg_dump can be expected to load into
       PostgreSQL server versions newer than pg_dump's version.  pg_dump can
       also dump from PostgreSQL servers older than its own version.
       (Currently, servers back to version 8.0 are supported.) However,
       pg_dump cannot dump from PostgreSQL servers newer than its own major
       version; it will refuse to even try, rather than risk making an invalid
       dump. Also, it is not guaranteed that pg_dump's output can be loaded
       into a server of an older major version — not even if the dump was
       taken from a server of that version. Loading a dump file into an older
       server may require manual editing of the dump file to remove syntax not
       understood by the older server. Use of the --quote-all-identifiers
       option is recommended in cross-version cases, as it can prevent
       problems arising from varying reserved-word lists in different
       PostgreSQL versions.

       When dumping logical replication subscriptions, pg_dump will generate
       CREATE SUBSCRIPTION commands that use the connect = false option, so
       that restoring the subscription does not make remote connections for
       creating a replication slot or for initial table copy. That way, the
       dump can be restored without requiring network access to the remote
       servers. It is then up to the user to reactivate the subscriptions in a
       suitable way. If the involved hosts have changed, the connection
       information might have to be changed. It might also be appropriate to
       truncate the target tables before initiating a new full table copy.
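       For example, once the connection information has been verified
       and a replication slot exists on the publisher, a restored
       subscription named mysub (name is illustrative) might be
       reactivated with:

           ALTER SUBSCRIPTION mysub ENABLE;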

EXAMPLES

       To dump a database called mydb into an SQL-script file:

           $ pg_dump mydb > db.sql

       To reload such a script into a (freshly created) database named newdb:

           $ psql -d newdb -f db.sql

       To dump a database into a custom-format archive file:

           $ pg_dump -Fc mydb > db.dump

       To dump a database into a directory-format archive:

           $ pg_dump -Fd mydb -f dumpdir

       To dump a database into a directory-format archive in parallel with 5
       worker jobs:

           $ pg_dump -Fd mydb -j 5 -f dumpdir

       To reload an archive file into a (freshly created) database named
       newdb:

           $ pg_restore -d newdb db.dump

       To reload an archive file into the same database it was dumped from,
       discarding the current contents of that database:

           $ pg_restore -d postgres --clean --create db.dump

       To dump a single table named mytab:

           $ pg_dump -t mytab mydb > db.sql

       To dump all tables whose names start with emp in the detroit schema,
       except for the table named employee_log:

           $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

       To dump all schemas whose names start with east or west and end in gsm,
       excluding any schemas whose names contain the word test:

           $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

       The same, using regular expression notation to consolidate the
       switches:

           $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

       To dump all database objects except for tables whose names begin with
       ts_:

           $ pg_dump -T 'ts_*' mydb > db.sql

       To specify an upper-case or mixed-case name in -t and related switches,
       you need to double-quote the name; else it will be folded to lower case
       (see Patterns below). But double quotes are special to the shell, so in
       turn they must be quoted. Thus, to dump a single table with a
       mixed-case name, you need something like

           $ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql

SEE ALSO

       pg_dumpall(1), pg_restore(1), psql(1)


PostgreSQL 14.3                      2022                           PG_DUMP(1)