PG_DUMP(1)               PostgreSQL 15.4 Documentation               PG_DUMP(1)

NAME
       pg_dump - extract a PostgreSQL database into a script file or other
       archive file

SYNOPSIS
       pg_dump [connection-option...] [option...] [dbname]

DESCRIPTION
       pg_dump is a utility for backing up a PostgreSQL database. It makes
       consistent backups even if the database is being used concurrently.
       pg_dump does not block other users accessing the database (readers or
       writers).

       pg_dump only dumps a single database. To back up an entire cluster, or
       to back up global objects that are common to all databases in a
       cluster (such as roles and tablespaces), use pg_dumpall(1).

       Dumps can be output in script or archive file formats. Script dumps
       are plain-text files containing the SQL commands required to
       reconstruct the database to the state it was in at the time it was
       saved. To restore from such a script, feed it to psql(1). Script files
       can be used to reconstruct the database even on other machines and
       other architectures; with some modifications, even on other SQL
       database products.

       The alternative archive file formats must be used with pg_restore(1)
       to rebuild the database. They allow pg_restore to be selective about
       what is restored, or even to reorder the items prior to being
       restored. The archive file formats are designed to be portable across
       architectures.

       When used with one of the archive file formats and combined with
       pg_restore, pg_dump provides a flexible archival and transfer
       mechanism. pg_dump can be used to back up an entire database, then
       pg_restore can be used to examine the archive and/or select which
       parts of the database are to be restored. The most flexible output
       file formats are the “custom” format (-Fc) and the “directory” format
       (-Fd). They allow for selection and reordering of all archived items,
       support parallel restoration, and are compressed by default. The
       “directory” format is the only format that supports parallel dumps.

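       For example, a custom-format archive can later be inspected and
       selectively restored; this is a sketch, where db.dump, newdb, and
       mytab are placeholder names:

           $ pg_dump -Fc mydb > db.dump
           $ pg_restore -l db.dump                  # list the archive's contents
           $ pg_restore -d newdb -t mytab db.dump   # restore a single table
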
       While running pg_dump, one should examine the output for any warnings
       (printed on standard error), especially in light of the limitations
       listed below.

OPTIONS
       The following command-line options control the content and format of
       the output.

       dbname
           Specifies the name of the database to be dumped. If this is not
           specified, the environment variable PGDATABASE is used. If that is
           not set, the user name specified for the connection is used.

       -a
       --data-only
           Dump only the data, not the schema (data definitions). Table data,
           large objects, and sequence values are dumped.

           This option is similar to, but for historical reasons not
           identical to, specifying --section=data.

       -b
       --blobs
           Include large objects in the dump. This is the default behavior
           except when --schema, --table, or --schema-only is specified. The
           -b switch is therefore only useful to add large objects to dumps
           where a specific schema or table has been requested. Note that
           blobs are considered data and therefore will be included when
           --data-only is used, but not when --schema-only is.

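           For example, to add large objects back to a dump restricted to a
           single table (mytab is a placeholder name):

               $ pg_dump -t mytab -b mydb > db.sql
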
       -B
       --no-blobs
           Exclude large objects in the dump.

           When both -b and -B are given, large objects are output when data
           is being dumped; see the -b documentation.

       -c
       --clean
           Output commands to clean (drop) database objects prior to
           outputting the commands for creating them. (Unless --if-exists is
           also specified, restore might generate some harmless error
           messages, if any objects were not present in the destination
           database.)

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

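           For example, to produce a script whose DROP commands do not raise
           errors for objects missing from the destination database (a
           sketch):

               $ pg_dump --clean --if-exists mydb > db.sql
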
       -C
       --create
           Begin the output with a command to create the database itself and
           reconnect to the created database. (With a script of this form, it
           doesn't matter which database in the destination installation you
           connect to before running the script.) If --clean is also
           specified, the script drops and recreates the target database
           before reconnecting to it.

           With --create, the output also includes the database's comment if
           any, and any configuration variable settings that are specific to
           this database, that is, any ALTER DATABASE ... SET ... and ALTER
           ROLE ... IN DATABASE ... SET ... commands that mention this
           database. Access privileges for the database itself are also
           dumped, unless --no-acl is specified.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

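           For example, a drop-and-recreate script can be replayed from any
           database in the destination cluster, such as the postgres
           maintenance database (a sketch):

               $ pg_dump -C -c mydb > db.sql
               $ psql -d postgres -f db.sql
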
       -e pattern
       --extension=pattern
           Dump only extensions matching pattern. When this option is not
           specified, all non-system extensions in the target database will
           be dumped. Multiple extensions can be selected by writing multiple
           -e switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple extensions can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards.

           Any configuration relation registered by pg_extension_config_dump
           is included in the dump if its extension is specified by
           --extension.

           Note
               When -e is specified, pg_dump makes no attempt to dump any
               other database objects that the selected extension(s) might
               depend upon. Therefore, there is no guarantee that the results
               of a specific-extension dump can be successfully restored by
               themselves into a clean database.

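           For example, to dump only extensions whose names begin with
           postgis (a placeholder prefix), quoting the pattern so the shell
           does not expand it:

               $ pg_dump -e 'postgis*' mydb > db.sql
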
       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another
           way to get the same result is to set the PGCLIENTENCODING
           environment variable to the desired dump encoding.) The supported
           encodings are described in Section 24.3.1.

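           For example, these two commands are equivalent ways to force a
           UTF8-encoded dump:

               $ pg_dump -E UTF8 mydb > db.sql
               $ PGCLIENTENCODING=UTF8 pg_dump mydb > db.sql
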
       -f file
       --file=file
           Send output to the specified file. This parameter can be omitted
           for file-based output formats, in which case the standard output
           is used. It must be given for the directory output format,
           however, where it specifies the target directory instead of a
           file. In this case the directory is created by pg_dump and must
           not exist beforehand.

       -F format
       --format=format
           Selects the format of the output.  format can be one of the
           following:

           p
           plain
               Output a plain-text SQL script file (the default).

           c
           custom
               Output a custom-format archive suitable for input into
               pg_restore. Together with the directory output format, this is
               the most flexible output format in that it allows manual
               selection and reordering of archived items during restore.
               This format is also compressed by default.

           d
           directory
               Output a directory-format archive suitable for input into
               pg_restore. This will create a directory with one file for
               each table and blob being dumped, plus a so-called Table of
               Contents file describing the dumped objects in a
               machine-readable format that pg_restore can read. A directory
               format archive can be manipulated with standard Unix tools;
               for example, files in an uncompressed archive can be
               compressed with the gzip tool. This format is compressed by
               default and also supports parallel dumps.

           t
           tar
               Output a tar-format archive suitable for input into
               pg_restore. The tar format is compatible with the directory
               format: extracting a tar-format archive produces a valid
               directory-format archive. However, the tar format does not
               support compression. Also, when using tar format the relative
               order of table data items cannot be changed during restore.

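           For example, because extracting a tar archive yields a valid
           directory-format archive, the two formats can be converted into
           each other with standard tools (db.tar and dumpdir are placeholder
           names):

               $ pg_dump -Ft mydb > db.tar
               $ mkdir dumpdir && tar -xf db.tar -C dumpdir
               $ pg_restore -d newdb dumpdir
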
       -j njobs
       --jobs=njobs
           Run the dump in parallel by dumping njobs tables simultaneously.
           This option may reduce the time needed to perform the dump but it
           also increases the load on the database server. You can only use
           this option with the directory output format because this is the
           only output format where multiple processes can write their data
           at the same time.

           pg_dump will open njobs + 1 connections to the database, so make
           sure your max_connections setting is high enough to accommodate
           all connections.

           Requesting exclusive locks on database objects while running a
           parallel dump could cause the dump to fail. The reason is that the
           pg_dump leader process requests shared locks (ACCESS SHARE) on the
           objects that the worker processes are going to dump later, in
           order to make sure that nobody removes them while the dump is
           running. If another client then requests an exclusive lock on a
           table, that lock will not be granted but will be queued waiting
           for the shared lock of the leader process to be released.
           Consequently any other access to the table will not be granted
           either and will queue after the exclusive lock request. This
           includes the worker process trying to dump the table. Without any
           precautions this would be a classic deadlock situation. To detect
           this conflict, the pg_dump worker process requests another shared
           lock using the NOWAIT option. If the worker process is not granted
           this shared lock, somebody else must have requested an exclusive
           lock in the meantime and there is no way to continue with the
           dump, so pg_dump has no choice but to abort the dump.

           To perform a parallel dump, the database server needs to support
           synchronized snapshots, a feature that was introduced in
           PostgreSQL 9.2 for primary servers and 10 for standbys. With this
           feature, database clients can ensure they see the same data set
           even though they use different connections.  pg_dump -j uses
           multiple database connections; it connects to the database once
           with the leader process and once again for each worker job.
           Without the synchronized snapshot feature, the different worker
           jobs wouldn't be guaranteed to see the same data in each
           connection, which could lead to an inconsistent backup.

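           For example, a parallel dump with 8 workers opens 9 database
           connections, so max_connections must have room for at least that
           many (dumpdir is a placeholder name):

               $ pg_dump -Fd -j 8 -f dumpdir mydb
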
       -n pattern
       --schema=pattern
           Dump only schemas matching pattern; this selects both the schema
           itself, and all its contained objects. When this option is not
           specified, all non-system schemas in the target database will be
           dumped. Multiple schemas can be selected by writing multiple -n
           switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns below), so multiple schemas can also be selected by
           writing wildcard characters in the pattern. When using wildcards,
           be careful to quote the pattern if needed to prevent the shell
           from expanding the wildcards; see Examples below.

           Note
               When -n is specified, pg_dump makes no attempt to dump any
               other database objects that the selected schema(s) might
               depend upon. Therefore, there is no guarantee that the results
               of a specific-schema dump can be successfully restored by
               themselves into a clean database.

           Note
               Non-schema objects such as blobs are not dumped when -n is
               specified. You can add blobs back to the dump with the --blobs
               switch.

       -N pattern
       --exclude-schema=pattern
           Do not dump any schemas matching pattern. The pattern is
           interpreted according to the same rules as for -n.  -N can be
           given more than once to exclude schemas matching any of several
           patterns.

           When both -n and -N are given, the behavior is to dump just the
           schemas that match at least one -n switch but no -N switches. If
           -N appears without -n, then schemas matching -N are excluded from
           what is otherwise a normal dump.

       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dump issues ALTER OWNER or SET
           SESSION AUTHORIZATION statements to set ownership of created
           database objects. These statements will fail when the script is
           run unless it is started by a superuser (or the same user that
           owns all of the objects in the script). To make a script that can
           be restored by any user, but will give that user ownership of all
           the objects, specify -O.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       -R
       --no-reconnect
           This option is obsolete but still accepted for backwards
           compatibility.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

           This option is the inverse of --data-only. It is similar to, but
           for historical reasons not identical to, specifying
           --section=pre-data --section=post-data.

           (Do not confuse this with the --schema option, which uses the word
           “schema” in a different meaning.)

           To exclude table data for only a subset of tables in the database,
           see --exclude-table-data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually,
           it's better to leave this out, and instead start the resulting
           script as superuser.)

       -t pattern
       --table=pattern
           Dump only tables with names matching pattern. Multiple tables can
           be selected by writing multiple -t switches. The pattern parameter
           is interpreted as a pattern according to the same rules used by
           psql's \d commands (see Patterns below), so multiple tables can
           also be selected by writing wildcard characters in the pattern.
           When using wildcards, be careful to quote the pattern if needed to
           prevent the shell from expanding the wildcards; see Examples
           below.

           As well as tables, this option can be used to dump the definition
           of matching views, materialized views, foreign tables, and
           sequences. It will not dump the contents of views or materialized
           views, and the contents of foreign tables will only be dumped if
           the corresponding foreign server is specified with
           --include-foreign-data.

           The -n and -N switches have no effect when -t is used, because
           tables selected by -t will be dumped regardless of those switches,
           and non-table objects will not be dumped.

           Note
               When -t is specified, pg_dump makes no attempt to dump any
               other database objects that the selected table(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-table dump can be successfully restored by themselves
               into a clean database.

       -T pattern
       --exclude-table=pattern
           Do not dump any tables matching pattern. The pattern is
           interpreted according to the same rules as for -t.  -T can be
           given more than once to exclude tables matching any of several
           patterns.

           When both -t and -T are given, the behavior is to dump just the
           tables that match at least one -t switch but no -T switches. If -T
           appears without -t, then tables matching -T are excluded from what
           is otherwise a normal dump.

       -v
       --verbose
           Specifies verbose mode. This will cause pg_dump to output detailed
           object comments and start/stop times to the dump file, and
           progress messages to standard error. Repeating the option causes
           additional debug-level messages to appear on standard error.

       -V
       --version
           Print the pg_dump version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       -Z 0..9
       --compress=0..9
           Specify the compression level to use. Zero means no compression.
           For the custom and directory archive formats, this specifies
           compression of individual table-data segments, and the default is
           to compress at a moderate level. For plain text output, setting a
           nonzero compression level causes the entire output file to be
           compressed, as though it had been fed through gzip; but the
           default is not to compress. The tar archive format currently does
           not support compression at all.

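           For example, to dump with maximum compression, either as a
           custom-format archive or as a gzip-compressed plain-text script:

               $ pg_dump -Fc -Z 9 mydb > db.dump
               $ pg_dump -Z 9 mydb > db.sql.gz
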
       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of
           the option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be loaded
           into non-PostgreSQL databases. Any error during restoring will
           cause only rows that are part of the problematic INSERT to be
           lost, rather than the entire table contents.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function
           bodies, and forces them to be quoted using SQL standard string
           syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dump to include commands to temporarily disable
           triggers on the target tables while the data is restored. Use this
           if you have referential integrity checks or other triggers on the
           tables that you do not want to invoke during data restore.

           Presently, the commands emitted for --disable-triggers must be
           done as superuser. So, you should also specify a superuser name
           with -S, or preferably be careful to start the resulting script as
           a superuser.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

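           For example, a data-only dump whose script disables triggers
           around the data load (run the resulting script as a superuser):

               $ pg_dump --data-only --disable-triggers mydb > data.sql
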
       --enable-row-security
           This option is relevant only when dumping the contents of a table
           which has row security. By default, pg_dump will set row_security
           to off, to ensure that all data is dumped from the table. If the
           user does not have sufficient privileges to bypass row security,
           then an error is thrown. This parameter instructs pg_dump to set
           row_security to on instead, allowing the user to dump the parts of
           the contents of the table that they have access to.

           Note that if you use this option currently, you probably also want
           the dump to be in INSERT format, as the COPY FROM during restore
           does not support row security.

       --exclude-table-data=pattern
           Do not dump data for any tables matching pattern. The pattern is
           interpreted according to the same rules as for -t.
           --exclude-table-data can be given more than once to exclude tables
           matching any of several patterns. This option is useful when you
           need the definition of a particular table even though you do not
           need the data in it.

           To exclude data for all tables in the database, see --schema-only.

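           For example, to keep the definition of a large history table while
           omitting its contents (audit_log is a placeholder name):

               $ pg_dump --exclude-table-data=audit_log mydb > db.sql
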
       --extra-float-digits=ndigits
           Use the specified value of extra_float_digits when dumping
           floating-point data, instead of the maximum available precision.
           Routine dumps made for backup purposes should not use this option.

       --if-exists
           Use conditional commands (i.e., add an IF EXISTS clause) when
           cleaning database objects. This option is not valid unless --clean
           is also specified.

       --include-foreign-data=foreignserver
           Dump the data for any foreign table with a foreign server matching
           foreignserver pattern. Multiple foreign servers can be selected by
           writing multiple --include-foreign-data switches. Also, the
           foreignserver parameter is interpreted as a pattern according to
           the same rules used by psql's \d commands (see Patterns below), so
           multiple foreign servers can also be selected by writing wildcard
           characters in the pattern. When using wildcards, be careful to
           quote the pattern if needed to prevent the shell from expanding
           the wildcards; see Examples below. The only exception is that an
           empty pattern is disallowed.

           Note
               When --include-foreign-data is specified, pg_dump does not
               check that the foreign table is writable. Therefore, there is
               no guarantee that the results of a foreign table dump can be
               successfully restored.

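           For example, to also dump the rows of foreign tables served by one
           particular foreign server (remote_server is a placeholder name):

               $ pg_dump --include-foreign-data=remote_server mydb > db.sql
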
       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. Any error during
           restoring will cause only rows that are part of the problematic
           INSERT to be lost, rather than the entire table contents. Note
           that the restore might fail altogether if you have rearranged
           column order. The --column-inserts option is safe against column
           order changes, though even slower.

       --load-via-partition-root
           When dumping data for a table partition, make the COPY or INSERT
           statements target the root of the partitioning hierarchy that
           contains it, rather than the partition itself. This causes the
           appropriate partition to be re-determined for each row when the
           data is loaded. This may be useful when restoring data on a server
           where rows do not always fall into the same partitions as they did
           on the original server. That could happen, for example, if the
           partitioning column is of type text and the two systems have
           different definitions of the collation used to sort the
           partitioning column.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the beginning
           of the dump. Instead fail if unable to lock a table within the
           specified timeout. The timeout may be specified in any of the
           formats accepted by SET statement_timeout. (Allowed formats vary
           depending on the server version you are dumping from, but an
           integer number of milliseconds is accepted by all versions.)

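           For example, to abort the dump if a table lock cannot be acquired
           within five seconds (5000 milliseconds):

               $ pg_dump --lock-wait-timeout=5000 mydb > db.sql
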
       --no-comments
           Do not dump comments.

       --no-publications
           Do not dump publications.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dump will wait for all files to be written safely
           to disk. This option causes pg_dump to return without waiting,
           which is faster, but means that a subsequent operating system
           crash can leave the dump corrupt. Generally, this option is useful
           for testing but should not be used when dumping data from a
           production installation.

       --no-table-access-method
           Do not output commands to select table access methods. With this
           option, all objects will be created with whichever table access
           method is the default during restore.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       --no-tablespaces
           Do not output commands to select tablespaces. With this option,
           all objects will be created in whichever tablespace is the default
           during restore.

           This option is ignored when emitting an archive (non-text) output
           file. For the archive formats, you can specify the option when you
           call pg_restore.

       --no-toast-compression
           Do not output commands to set TOAST compression methods. With this
           option, all columns will be restored with the default compression
           setting.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables and sequences. This
           option has no effect on whether or not the table and sequence
           definitions (schema) are dumped; it only suppresses dumping the
           table and sequence data. Data in unlogged tables and sequences is
           always excluded when dumping from a standby server.

       --on-conflict-do-nothing
           Add ON CONFLICT DO NOTHING to INSERT commands. This option is not
           valid unless --inserts, --column-inserts or --rows-per-insert is
           also specified.

       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version is
           different from pg_dump's, or when the output is intended to be
           loaded into a server of a different major version. By default,
           pg_dump quotes only identifiers that are reserved words in its own
           major version. This sometimes results in compatibility issues when
           dealing with servers of other versions that may have slightly
           different sets of reserved words. Using --quote-all-identifiers
           prevents such issues, at the price of a harder-to-read dump
           script.

       --rows-per-insert=nrows
           Dump data as INSERT commands (rather than COPY). Controls the
           maximum number of rows per INSERT command. The value specified
           must be a number greater than zero. Any error during restoring
           will cause only rows that are part of the problematic INSERT to be
           lost, rather than the entire table contents.

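           For example, to emit multi-row INSERT commands of up to 1000 rows
           each that skip already-existing rows on restore:

               $ pg_dump --rows-per-insert=1000 --on-conflict-do-nothing mydb > db.sql
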
       --section=sectionname
           Only dump the named section. The section name can be pre-data,
           data, or post-data. This option can be specified more than once to
           select multiple sections. The default is to dump all sections.

           The data section contains actual table data, large-object
           contents, and sequence values. Post-data items include definitions
           of indexes, triggers, rules, and constraints other than validated
           check constraints. Pre-data items include all other data
           definition items.

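           For example, a common pattern is to restore table definitions
           first, then the data, and only then the indexes and constraints:

               $ pg_dump --section=pre-data mydb > pre.sql
               $ pg_dump --section=data mydb > data.sql
               $ pg_dump --section=post-data mydb > post.sql
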
       --serializable-deferrable
           Use a serializable transaction for the dump, to ensure that the
           snapshot used is consistent with later database states; but do
           this by waiting for a point in the transaction stream at which no
           anomalies can be present, so that there isn't a risk of the dump
           failing or causing other transactions to roll back with a
           serialization_failure. See Chapter 13 for more information about
           transaction isolation and concurrency control.

           This option is not beneficial for a dump which is intended only
           for disaster recovery. It could be useful for a dump used to load
           a copy of the database for reporting or other read-only load
           sharing while the original database continues to be updated.
           Without it the dump may reflect a state which is not consistent
           with any serial execution of the transactions eventually
           committed. For example, if batch processing techniques are used, a
           batch may show as closed in the dump without all of the items
           which are in the batch appearing.

           This option will make no difference if there are no read-write
           transactions active when pg_dump is started. If read-write
           transactions are active, the start of the dump may be delayed for
           an indeterminate length of time. Once running, performance with or
           without the switch is the same.

       --snapshot=snapshotname
           Use the specified synchronized snapshot when making a dump of the
           database (see Table 9.92 for more details).

           This option is useful when needing to synchronize the dump with a
           logical replication slot (see Chapter 49) or with a concurrent
           session.

           In the case of a parallel dump, the snapshot name defined by this
           option is used rather than taking a new snapshot.

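           For example, a snapshot exported by a concurrent session with
           pg_export_snapshot() can be handed to pg_dump so that both see
           exactly the same data (the snapshot name shown is illustrative):

               mydb=# SELECT pg_export_snapshot();
                pg_export_snapshot
               ---------------------
                00000003-0000001B-1

               $ pg_dump --snapshot=00000003-0000001B-1 mydb > db.sql
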
       --strict-names
           Require that each extension (-e/--extension), schema (-n/--schema)
           and table (-t/--table) qualifier match at least one
           extension/schema/table in the database to be dumped. Note that if
           none of the extension/schema/table qualifiers find matches,
           pg_dump will generate an error even without --strict-names.

           This option has no effect on -N/--exclude-schema,
           -T/--exclude-table, or --exclude-table-data. An exclude pattern
           failing to match any objects is not considered an error.

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes the
           dump more standards-compatible, but depending on the history of
           the objects in the dump, might not restore properly. Also, a dump
           using SET SESSION AUTHORIZATION will certainly require superuser
           privileges to restore correctly, whereas ALTER OWNER requires
           lesser privileges.

       -?
       --help
           Show help about pg_dump command line arguments, and exit.

       The following command-line options control the database connection
       parameters.

       -d dbname
       --dbname=dbname
           Specifies the name of the database to connect to. This is
           equivalent to specifying dbname as the first non-option argument
           on the command line. The dbname can be a connection string. If so,
           connection string parameters will override any conflicting command
           line options.

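           For example, connection parameters can be supplied as a single
           connection string (the host and user shown are placeholders):

               $ pg_dump 'host=db.example.com port=5432 user=alice dbname=mydb' > db.sql
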
       -h host
       --host=host
           Specifies the host name of the machine on which the server is
           running. If the value begins with a slash, it is used as the
           directory for the Unix domain socket. The default is taken from
           the PGHOST environment variable, if set, else a Unix domain socket
           connection is attempted.

       -p port
       --port=port
           Specifies the TCP port or local Unix domain socket file extension
           on which the server is listening for connections. Defaults to the
           PGPORT environment variable, if set, or a compiled-in default.

       -U username
       --username=username
           User name to connect as.

       -w
       --no-password
           Never issue a password prompt. If the server requires password
           authentication and a password is not available by other means such
           as a .pgpass file, the connection attempt will fail. This option
           can be useful in batch jobs and scripts where no user is present
           to enter a password.

       -W
       --password
           Force pg_dump to prompt for a password before connecting to a
           database.

           This option is never essential, since pg_dump will automatically
           prompt for a password if the server demands password
           authentication. However, pg_dump will waste a connection attempt
           finding out that the server wants a password. In some cases it is
           worth typing -W to avoid the extra connection attempt.

       --role=rolename
           Specifies a role name to be used to create the dump. This option
           causes pg_dump to issue a SET ROLE rolename command after
           connecting to the database. It is useful when the authenticated
           user (specified by -U) lacks privileges needed by pg_dump, but can
           switch to a role with the required rights. Some installations have
           a policy against logging in directly as a superuser, and use of
           this option allows dumps to be made without violating the policy.
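
           For example, when direct superuser logins are disallowed but the
           authenticated user may switch to a privileged role (alice and
           dump_admin are placeholder names):

               $ pg_dump -U alice --role=dump_admin mydb > db.sql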

ENVIRONMENT
       PGDATABASE
       PGHOST
       PGOPTIONS
       PGPORT
       PGUSER
           Default connection parameters.

       PG_COLOR
           Specifies whether to use color in diagnostic messages. Possible
           values are always, auto and never.

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see Section 34.15).

DIAGNOSTICS
       pg_dump internally executes SELECT statements. If you have problems
       running pg_dump, make sure you are able to select information from the
       database using, for example, psql(1). Also, any default connection
       settings and environment variables used by the libpq front-end
       library will apply.

       The database activity of pg_dump is normally collected by the
       cumulative statistics system. If this is undesirable, you can set
       parameter track_counts to false via PGOPTIONS or the ALTER USER
       command.
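
       For example, statistics collection can be turned off for just the dump
       session via PGOPTIONS (a sketch; the session must have sufficient
       privileges to set track_counts):

           $ PGOPTIONS='-c track_counts=off' pg_dump mydb > db.sql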

NOTES
       If your database cluster has any local additions to the template1
       database, be careful to restore the output of pg_dump into a truly
       empty database; otherwise you are likely to get errors due to
       duplicate definitions of the added objects. To make an empty database
       without any local additions, copy from template0 not template1, for
       example:

           CREATE DATABASE foo WITH TEMPLATE template0;

       When a data-only dump is chosen and the option --disable-triggers is
       used, pg_dump emits commands to disable triggers on user tables before
       inserting the data, and then commands to re-enable them after the data
       has been inserted. If the restore is stopped in the middle, the system
       catalogs might be left in the wrong state.

       The dump file produced by pg_dump does not contain the statistics used
       by the optimizer to make query planning decisions. Therefore, it is
       wise to run ANALYZE after restoring from a dump file to ensure optimal
       performance; see Section 25.1.3 and Section 25.1.6 for more
       information.

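       For example, statistics for a freshly restored database can be rebuilt
       with either of these equivalent commands (newdb is a placeholder
       name):

           $ vacuumdb --analyze-only newdb
           $ psql -d newdb -c 'ANALYZE'
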
       Because pg_dump is used to transfer data to newer versions of
       PostgreSQL, the output of pg_dump can be expected to load into
       PostgreSQL server versions newer than pg_dump's version.  pg_dump can
       also dump from PostgreSQL servers older than its own version.
       (Currently, servers back to version 9.2 are supported.) However,
       pg_dump cannot dump from PostgreSQL servers newer than its own major
       version; it will refuse to even try, rather than risk making an
       invalid dump. Also, it is not guaranteed that pg_dump's output can be
       loaded into a server of an older major version, not even if the dump
       was taken from a server of that version. Loading a dump file into an
       older server may require manual editing of the dump file to remove
       syntax not understood by the older server. Use of the
       --quote-all-identifiers option is recommended in cross-version cases,
       as it can prevent problems arising from varying reserved-word lists in
       different PostgreSQL versions.

       When dumping logical replication subscriptions, pg_dump will generate
       CREATE SUBSCRIPTION commands that use the connect = false option, so
       that restoring the subscription does not make remote connections for
       creating a replication slot or for initial table copy. That way, the
       dump can be restored without requiring network access to the remote
       servers. It is then up to the user to reactivate the subscriptions in
       a suitable way. If the involved hosts have changed, the connection
       information might have to be changed. It might also be appropriate to
       truncate the target tables before initiating a new full table copy.
       If users intend to copy initial data during refresh, they must create
       the slot with two_phase = false. After the initial sync, the two_phase
       option will be enabled automatically by the subscriber if the
       subscription had originally been created with the two_phase = true
       option.

EXAMPLES
       To dump a database called mydb into an SQL-script file:

           $ pg_dump mydb > db.sql

       To reload such a script into a (freshly created) database named newdb:

           $ psql -d newdb -f db.sql

       To dump a database into a custom-format archive file:

           $ pg_dump -Fc mydb > db.dump

       To dump a database into a directory-format archive:

           $ pg_dump -Fd mydb -f dumpdir

       To dump a database into a directory-format archive in parallel with 5
       worker jobs:

           $ pg_dump -Fd mydb -j 5 -f dumpdir

       To reload an archive file into a (freshly created) database named
       newdb:

           $ pg_restore -d newdb db.dump

       To reload an archive file into the same database it was dumped from,
       discarding the current contents of that database:

           $ pg_restore -d postgres --clean --create db.dump

       To dump a single table named mytab:

           $ pg_dump -t mytab mydb > db.sql

       To dump all tables whose names start with emp in the detroit schema,
       except for the table named employee_log:

           $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

       To dump all schemas whose names start with east or west and end in
       gsm, excluding any schemas whose names contain the word test:

           $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

       The same, using regular expression notation to consolidate the
       switches:

           $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

       To dump all database objects except for tables whose names begin with
       ts_:

           $ pg_dump -T 'ts_*' mydb > db.sql

       To specify an upper-case or mixed-case name in -t and related
       switches, you need to double-quote the name; else it will be folded to
       lower case (see Patterns below). But double quotes are special to the
       shell, so in turn they must be quoted. Thus, to dump a single table
       with a mixed-case name, you need something like

           $ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql

SEE ALSO
       pg_dumpall(1), pg_restore(1), psql(1)

PostgreSQL 15.4                      2023                            PG_DUMP(1)