PG_DUMP(1)               PostgreSQL 16.1 Documentation              PG_DUMP(1)

NAME
       pg_dump - extract a PostgreSQL database into a script file or other
       archive file

SYNOPSIS
       pg_dump [connection-option...] [option...] [dbname]

DESCRIPTION
       pg_dump is a utility for backing up a PostgreSQL database. It makes
       consistent backups even if the database is being used concurrently.
       pg_dump does not block other users accessing the database (readers or
       writers).

       pg_dump only dumps a single database. To back up an entire cluster,
       or to back up global objects that are common to all databases in a
       cluster (such as roles and tablespaces), use pg_dumpall(1).

       Dumps can be output in script or archive file formats. Script dumps
       are plain-text files containing the SQL commands required to
       reconstruct the database to the state it was in at the time it was
       saved. To restore from such a script, feed it to psql(1). Script
       files can be used to reconstruct the database even on other machines
       and other architectures; with some modifications, even on other SQL
       database products.

       The alternative archive file formats must be used with pg_restore(1)
       to rebuild the database. They allow pg_restore to be selective about
       what is restored, or even to reorder the items prior to being
       restored. The archive file formats are designed to be portable across
       architectures.

       When used with one of the archive file formats and combined with
       pg_restore, pg_dump provides a flexible archival and transfer
       mechanism. pg_dump can be used to back up an entire database, then
       pg_restore can be used to examine the archive and/or select which
       parts of the database are to be restored. The most flexible output
       file formats are the “custom” format (-Fc) and the “directory” format
       (-Fd). They allow for selection and reordering of all archived items,
       support parallel restoration, and are compressed by default. The
       “directory” format is the only format that supports parallel dumps.

       While running pg_dump, one should examine the output for any warnings
       (printed on standard error), especially in light of the limitations
       listed below.

OPTIONS
       The following command-line options control the content and format of
       the output.

       dbname
           Specifies the name of the database to be dumped. If this is not
           specified, the environment variable PGDATABASE is used. If that
           is not set, the user name specified for the connection is used.

       -a
       --data-only
           Dump only the data, not the schema (data definitions). Table
           data, large objects, and sequence values are dumped.

           This option is similar to, but for historical reasons not
           identical to, specifying --section=data.

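           For example, a minimal data-only dump (mydb is a placeholder
           database name):

               $ pg_dump --data-only mydb > data.sql
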
       -b
       --large-objects
       --blobs (deprecated)
           Include large objects in the dump. This is the default behavior
           except when --schema, --table, or --schema-only is specified. The
           -b switch is therefore only useful to add large objects to dumps
           where a specific schema or table has been requested. Note that
           large objects are considered data and therefore will be included
           when --data-only is used, but not when --schema-only is.

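           A sketch of that use case, adding large objects to a
           single-table dump (mytab and mydb are placeholder names):

               $ pg_dump -t mytab -b mydb > mytab.sql
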
       -B
       --no-large-objects
       --no-blobs (deprecated)
           Exclude large objects from the dump.

           When both -b and -B are given, large objects are output whenever
           data is being dumped; see the -b documentation.

       -c
       --clean
           Output commands to DROP all the dumped database objects prior to
           outputting the commands for creating them. This option is useful
           when the restore is to overwrite an existing database. If any of
           the objects do not exist in the destination database, ignorable
           error messages will be reported during restore, unless
           --if-exists is also specified.

           This option is ignored when emitting an archive (non-text)
           output file. For the archive formats, you can specify the option
           when you call pg_restore.

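           For example, a plain-text dump that drops objects without
           “does not exist” errors on restore could be produced with:

               $ pg_dump --clean --if-exists mydb > db.sql
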
       -C
       --create
           Begin the output with a command to create the database itself and
           reconnect to the created database. (With a script of this form,
           it doesn't matter which database in the destination installation
           you connect to before running the script.) If --clean is also
           specified, the script drops and recreates the target database
           before reconnecting to it.

           With --create, the output also includes the database's comment if
           any, and any configuration variable settings that are specific to
           this database, that is, any ALTER DATABASE ... SET ... and ALTER
           ROLE ... IN DATABASE ... SET ... commands that mention this
           database. Access privileges for the database itself are also
           dumped, unless --no-acl is specified.

           This option is ignored when emitting an archive (non-text)
           output file. For the archive formats, you can specify the option
           when you call pg_restore.

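           A minimal sketch (mydb is a placeholder; connecting to the
           postgres maintenance database is just one possible starting
           point, since the script reconnects on its own):

               $ pg_dump -C mydb > db.sql
               $ psql -d postgres -f db.sql
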
       -e pattern
       --extension=pattern
           Dump only extensions matching pattern. When this option is not
           specified, all non-system extensions in the target database will
           be dumped. Multiple extensions can be selected by writing
           multiple -e switches. The pattern parameter is interpreted as a
           pattern according to the same rules used by psql's \d commands
           (see Patterns), so multiple extensions can also be selected by
           writing wildcard characters in the pattern. When using wildcards,
           be careful to quote the pattern if needed to prevent the shell
           from expanding the wildcards.

           Any configuration relation registered by
           pg_extension_config_dump is included in the dump if its extension
           is specified by --extension.

           Note
               When -e is specified, pg_dump makes no attempt to dump any
               other database objects that the selected extension(s) might
               depend upon. Therefore, there is no guarantee that the
               results of a specific-extension dump can be successfully
               restored by themselves into a clean database.

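           For example (the extension name is hypothetical), quoting keeps
           the shell from expanding the wildcard:

               $ pg_dump -e 'postgis*' mydb > db.sql
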
       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another
           way to get the same result is to set the PGCLIENTENCODING
           environment variable to the desired dump encoding.) The supported
           encodings are described in Section 24.3.1.

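           For example, either of the following should produce a
           UTF8-encoded dump (mydb is a placeholder):

               $ pg_dump -E UTF8 mydb > db.sql
               $ PGCLIENTENCODING=UTF8 pg_dump mydb > db.sql
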
       -f file
       --file=file
           Send output to the specified file. This parameter can be omitted
           for file-based output formats, in which case the standard output
           is used. However, it must be given for the directory output
           format, where it specifies the target directory instead of a
           file. In this case the directory is created by pg_dump and must
           not already exist.

       -F format
       --format=format
           Selects the format of the output.  format can be one of the
           following:

           p
           plain
               Output a plain-text SQL script file (the default).

           c
           custom
               Output a custom-format archive suitable for input into
               pg_restore. Together with the directory output format, this
               is the most flexible output format in that it allows manual
               selection and reordering of archived items during restore.
               This format is also compressed by default.

           d
           directory
               Output a directory-format archive suitable for input into
               pg_restore. This will create a directory with one file for
               each table and large object being dumped, plus a so-called
               Table of Contents file describing the dumped objects in a
               machine-readable format that pg_restore can read. A
               directory-format archive can be manipulated with standard
               Unix tools; for example, files in an uncompressed archive can
               be compressed with the gzip, lz4, or zstd tools. This format
               is compressed by default using gzip and also supports
               parallel dumps.

           t
           tar
               Output a tar-format archive suitable for input into
               pg_restore. The tar format is compatible with the directory
               format: extracting a tar-format archive produces a valid
               directory-format archive. However, the tar format does not
               support compression. Also, when using tar format the relative
               order of table data items cannot be changed during restore.

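           As a sketch of that compatibility (db.tar is a placeholder name
           for an archive made with -Ft), a tar archive can be unpacked into
           a directory-format archive that pg_restore accepts:

               $ mkdir dumpdir && tar -xf db.tar -C dumpdir
               $ pg_restore -d newdb dumpdir
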
       -j njobs
       --jobs=njobs
           Run the dump in parallel by dumping njobs tables simultaneously.
           This option may reduce the time needed to perform the dump but it
           also increases the load on the database server. You can only use
           this option with the directory output format because this is the
           only output format where multiple processes can write their data
           at the same time.

           pg_dump will open njobs + 1 connections to the database, so make
           sure your max_connections setting is high enough to accommodate
           all connections.

           Requesting exclusive locks on database objects while running a
           parallel dump could cause the dump to fail. The reason is that
           the pg_dump leader process requests shared locks (ACCESS SHARE)
           on the objects that the worker processes are going to dump later,
           in order to make sure that nobody deletes them while the dump is
           running. If another client then requests an exclusive lock on a
           table, that lock will not be granted but will be queued waiting
           for the shared lock of the leader process to be released.
           Consequently, any other access to the table will not be granted
           either and will queue after the exclusive lock request. This
           includes the worker process trying to dump the table. Without any
           precautions this would be a classic deadlock situation. To detect
           this conflict, the pg_dump worker process requests another shared
           lock using the NOWAIT option. If the worker process is not
           granted this shared lock, somebody else must have requested an
           exclusive lock in the meantime and there is no way to continue
           with the dump, so pg_dump has no choice but to abort the dump.

           To perform a parallel dump, the database server needs to support
           synchronized snapshots, a feature that was introduced in
           PostgreSQL 9.2 for primary servers and 10 for standbys. With this
           feature, database clients can ensure they see the same data set
           even though they use different connections.  pg_dump -j uses
           multiple database connections; it connects to the database once
           with the leader process and once again for each worker job.
           Without the synchronized snapshot feature, the different worker
           jobs wouldn't be guaranteed to see the same data in each
           connection, which could lead to an inconsistent backup.

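           For example, a four-worker directory-format dump that fails
           rather than waiting more than 30 seconds (30000 milliseconds) for
           table locks (mydb and dumpdir are placeholders):

               $ pg_dump -Fd -j 4 --lock-wait-timeout=30000 -f dumpdir mydb
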
       -n pattern
       --schema=pattern
           Dump only schemas matching pattern; this selects both the schema
           itself, and all its contained objects. When this option is not
           specified, all non-system schemas in the target database will be
           dumped. Multiple schemas can be selected by writing multiple -n
           switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple schemas can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see Examples below.

           Note
               When -n is specified, pg_dump makes no attempt to dump any
               other database objects that the selected schema(s) might
               depend upon. Therefore, there is no guarantee that the
               results of a specific-schema dump can be successfully
               restored by themselves into a clean database.

           Note
               Non-schema objects such as large objects are not dumped when
               -n is specified. You can add large objects back to the dump
               with the --large-objects switch.

       -N pattern
       --exclude-schema=pattern
           Do not dump any schemas matching pattern. The pattern is
           interpreted according to the same rules as for -n.  -N can be
           given more than once to exclude schemas matching any of several
           patterns.

           When both -n and -N are given, the behavior is to dump just the
           schemas that match at least one -n switch but no -N switches. If
           -N appears without -n, then schemas matching -N are excluded from
           what is otherwise a normal dump.

       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dump issues ALTER OWNER or SET
           SESSION AUTHORIZATION statements to set ownership of created
           database objects. These statements will fail when the script is
           run unless it is started by a superuser (or the same user that
           owns all of the objects in the script). To make a script that can
           be restored by any user, but will give that user ownership of all
           the objects, specify -O.

           This option is ignored when emitting an archive (non-text)
           output file. For the archive formats, you can specify the option
           when you call pg_restore.

       -R
       --no-reconnect
           This option is obsolete but still accepted for backwards
           compatibility.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

           This option is the inverse of --data-only. It is similar to, but
           for historical reasons not identical to, specifying
           --section=pre-data --section=post-data.

           (Do not confuse this with the --schema option, which uses the
           word “schema” with a different meaning.)

           To exclude table data for only a subset of tables in the
           database, see --exclude-table-data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually,
           it's better to leave this out, and instead start the resulting
           script as superuser.)

       -t pattern
       --table=pattern
           Dump only tables with names matching pattern. Multiple tables can
           be selected by writing multiple -t switches. The pattern
           parameter is interpreted as a pattern according to the same rules
           used by psql's \d commands (see Patterns), so multiple tables can
           also be selected by writing wildcard characters in the pattern.
           When using wildcards, be careful to quote the pattern if needed
           to prevent the shell from expanding the wildcards; see Examples
           below.

           As well as tables, this option can be used to dump the definition
           of matching views, materialized views, foreign tables, and
           sequences. It will not dump the contents of views or materialized
           views, and the contents of foreign tables will only be dumped if
           the corresponding foreign server is specified with
           --include-foreign-data.

           The -n and -N switches have no effect when -t is used, because
           tables selected by -t will be dumped regardless of those
           switches, and non-table objects will not be dumped.

           Note
               When -t is specified, pg_dump makes no attempt to dump any
               other database objects that the selected table(s) might
               depend upon. Therefore, there is no guarantee that the
               results of a specific-table dump can be successfully restored
               by themselves into a clean database.

       -T pattern
       --exclude-table=pattern
           Do not dump any tables matching pattern. The pattern is
           interpreted according to the same rules as for -t.  -T can be
           given more than once to exclude tables matching any of several
           patterns.

           When both -t and -T are given, the behavior is to dump just the
           tables that match at least one -t switch but no -T switches. If
           -T appears without -t, then tables matching -T are excluded from
           what is otherwise a normal dump.

       -v
       --verbose
           Specifies verbose mode. This will cause pg_dump to output
           detailed object comments and start/stop times to the dump file,
           and progress messages to standard error. Repeating the option
           causes additional debug-level messages to appear on standard
           error.

       -V
       --version
           Print the pg_dump version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       -Z level
       -Z method[:detail]
       --compress=level
       --compress=method[:detail]
           Specify the compression method and/or the compression level to
           use. The compression method can be set to gzip, lz4, zstd, or
           none for no compression. A compression detail string can
           optionally be specified. If the detail string is an integer, it
           specifies the compression level. Otherwise, it should be a
           comma-separated list of items, each of the form keyword or
           keyword=value. Currently, the supported keywords are level and
           long.

           If no compression level is specified, the default compression
           level will be used. If only a level is specified without
           mentioning an algorithm, gzip compression will be used if the
           level is greater than 0, and no compression will be used if the
           level is 0.

           For the custom and directory archive formats, this specifies
           compression of individual table-data segments, and the default is
           to compress using gzip at a moderate level. For plain text
           output, setting a nonzero compression level causes the entire
           output file to be compressed, as though it had been fed through
           gzip, lz4, or zstd; but the default is not to compress. With zstd
           compression, long mode may improve the compression ratio, at the
           cost of increased memory use.

           The tar archive format currently does not support compression at
           all.

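           For example, a custom-format archive compressed with zstd at
           level 9, and one with zstd long mode enabled, could be requested
           as follows (mydb is a placeholder):

               $ pg_dump -Fc -Z zstd:level=9 mydb > db.dump
               $ pg_dump -Fc -Z zstd:long mydb > db.dump
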
       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of
           the option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be
           loaded into non-PostgreSQL databases. Any error during restoring
           will cause only rows that are part of the problematic INSERT to
           be lost, rather than the entire table contents.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function
           bodies, and forces them to be quoted using SQL standard string
           syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dump to include commands to temporarily disable
           triggers on the target tables while the data is restored. Use
           this if you have referential integrity checks or other triggers
           on the tables that you do not want to invoke during data restore.

           Presently, the commands emitted for --disable-triggers must be
           done as superuser. So, you should also specify a superuser name
           with -S, or preferably be careful to start the resulting script
           as a superuser.

           This option is ignored when emitting an archive (non-text)
           output file. For the archive formats, you can specify the option
           when you call pg_restore.

       --enable-row-security
           This option is relevant only when dumping the contents of a table
           which has row security. By default, pg_dump will set row_security
           to off, to ensure that all data is dumped from the table. If the
           user does not have sufficient privileges to bypass row security,
           then an error is thrown. This parameter instructs pg_dump to set
           row_security to on instead, allowing the user to dump the parts
           of the contents of the table that they have access to.

           Note that if you use this option currently, you probably also
           want the dump to be in INSERT format, as the COPY FROM during
           restore does not support row security.

       --exclude-table-and-children=pattern
           This is the same as the -T/--exclude-table option, except that it
           also excludes any partitions or inheritance child tables of the
           table(s) matching the pattern.

       --exclude-table-data=pattern
           Do not dump data for any tables matching pattern. The pattern is
           interpreted according to the same rules as for -t.
           --exclude-table-data can be given more than once to exclude
           tables matching any of several patterns. This option is useful
           when you need the definition of a particular table even though
           you do not need the data in it.

           To exclude data for all tables in the database, see
           --schema-only.

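           For example, to keep the definitions of some log tables while
           skipping their contents (the pattern is hypothetical):

               $ pg_dump --exclude-table-data='*_log' mydb > db.sql
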
       --exclude-table-data-and-children=pattern
           This is the same as the --exclude-table-data option, except that
           it also excludes data of any partitions or inheritance child
           tables of the table(s) matching the pattern.

       --extra-float-digits=ndigits
           Use the specified value of extra_float_digits when dumping
           floating-point data, instead of the maximum available precision.
           Routine dumps made for backup purposes should not use this
           option.

       --if-exists
           Use DROP ... IF EXISTS commands to drop objects in --clean mode.
           This suppresses “does not exist” errors that might otherwise be
           reported. This option is not valid unless --clean is also
           specified.

       --include-foreign-data=foreignserver
           Dump the data for any foreign table with a foreign server
           matching the foreignserver pattern. Multiple foreign servers can
           be selected by writing multiple --include-foreign-data switches.
           Also, the foreignserver parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple foreign servers can also be selected by
           writing wildcard characters in the pattern. When using wildcards,
           be careful to quote the pattern if needed to prevent the shell
           from expanding the wildcards; see Examples below. The only
           exception is that an empty pattern is disallowed.

           Note
               When --include-foreign-data is specified, pg_dump does not
               check that the foreign table is writable. Therefore, there is
               no guarantee that the results of a foreign table dump can be
               successfully restored.
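
           For example (the foreign server name is hypothetical):

               $ pg_dump --include-foreign-data=remote_sales mydb > db.sql
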
       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. Any error during
           restoring will cause only rows that are part of the problematic
           INSERT to be lost, rather than the entire table contents. Note
           that the restore might fail altogether if you have rearranged
           column order. The --column-inserts option is safe against column
           order changes, though even slower.

       --load-via-partition-root
           When dumping data for a table partition, make the COPY or INSERT
           statements target the root of the partitioning hierarchy that
           contains it, rather than the partition itself. This causes the
           appropriate partition to be re-determined for each row when the
           data is loaded. This may be useful when restoring data on a
           server where rows do not always fall into the same partitions as
           they did on the original server. That could happen, for example,
           if the partitioning column is of type text and the two systems
           have different definitions of the collation used to sort the
           partitioning column.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the
           beginning of the dump. Instead fail if unable to lock a table
           within the specified timeout. The timeout may be specified in any
           of the formats accepted by SET statement_timeout. (Allowed
           formats vary depending on the server version you are dumping
           from, but an integer number of milliseconds is accepted by all
           versions.)

       --no-comments
           Do not dump comments.

       --no-publications
           Do not dump publications.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dump will wait for all files to be written safely
           to disk. This option causes pg_dump to return without waiting,
           which is faster, but means that a subsequent operating system
           crash can leave the dump corrupt. Generally, this option is
           useful for testing but should not be used when dumping data from
           a production installation.

       --no-table-access-method
           Do not output commands to select table access methods. With this
           option, all objects will be created with whichever table access
           method is the default during restore.

           This option is ignored when emitting an archive (non-text)
           output file. For the archive formats, you can specify the option
           when you call pg_restore.

       --no-tablespaces
           Do not output commands to select tablespaces. With this option,
           all objects will be created in whichever tablespace is the
           default during restore.

           This option is ignored when emitting an archive (non-text)
           output file. For the archive formats, you can specify the option
           when you call pg_restore.

       --no-toast-compression
           Do not output commands to set TOAST compression methods. With
           this option, all columns will be restored with the default
           compression setting.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables and sequences. This
           option has no effect on whether or not the table and sequence
           definitions (schema) are dumped; it only suppresses dumping the
           table and sequence data. Data in unlogged tables and sequences is
           always excluded when dumping from a standby server.

       --on-conflict-do-nothing
           Add ON CONFLICT DO NOTHING to INSERT commands. This option is not
           valid unless --inserts, --column-inserts or --rows-per-insert is
           also specified.

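           For example, an INSERT-style dump whose rows are skipped rather
           than rejected when they already exist in the target (mydb is a
           placeholder):

               $ pg_dump --inserts --on-conflict-do-nothing mydb > db.sql
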
       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version
           is different from pg_dump's, or when the output is intended to be
           loaded into a server of a different major version. By default,
           pg_dump quotes only identifiers that are reserved words in its
           own major version. This sometimes results in compatibility issues
           when dealing with servers of other versions that may have
           slightly different sets of reserved words. Using
           --quote-all-identifiers prevents such issues, at the price of a
           harder-to-read dump script.

       --rows-per-insert=nrows
           Dump data as INSERT commands (rather than COPY). Controls the
           maximum number of rows per INSERT command. The value specified
           must be a number greater than zero. Any error during restoring
           will cause only rows that are part of the problematic INSERT to
           be lost, rather than the entire table contents.

       --section=sectionname
           Only dump the named section. The section name can be pre-data,
           data, or post-data. This option can be specified more than once
           to select multiple sections. The default is to dump all sections.

           The data section contains actual table data, large-object
           contents, and sequence values. Post-data items include
           definitions of indexes, triggers, rules, and constraints other
           than validated check constraints. Pre-data items include all
           other data definition items.

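           As a sketch, the three sections can be dumped into separate
           scripts so that, for example, indexes are built only after the
           data is loaded (file names are placeholders):

               $ pg_dump --section=pre-data mydb > pre.sql
               $ pg_dump --section=data mydb > data.sql
               $ pg_dump --section=post-data mydb > post.sql
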
       --serializable-deferrable
           Use a serializable transaction for the dump, to ensure that the
           snapshot used is consistent with later database states; but do
           this by waiting for a point in the transaction stream at which no
           anomalies can be present, so that there isn't a risk of the dump
           failing or causing other transactions to roll back with a
           serialization_failure. See Chapter 13 for more information about
           transaction isolation and concurrency control.

           This option is not beneficial for a dump which is intended only
           for disaster recovery. It could be useful for a dump used to load
           a copy of the database for reporting or other read-only load
           sharing while the original database continues to be updated.
           Without it the dump may reflect a state which is not consistent
           with any serial execution of the transactions eventually
           committed. For example, if batch processing techniques are used,
           a batch may show as closed in the dump without all of the items
           which are in the batch appearing.

           This option will make no difference if there are no read-write
           transactions active when pg_dump is started. If read-write
           transactions are active, the start of the dump may be delayed for
           an indeterminate length of time. Once running, performance with
           or without the switch is the same.

       --snapshot=snapshotname
           Use the specified synchronized snapshot when making a dump of the
           database (see Table 9.94 for more details).

           This option is useful when needing to synchronize the dump with a
           logical replication slot (see Chapter 49) or with a concurrent
           session.

           In the case of a parallel dump, the snapshot name defined by this
           option is used rather than taking a new snapshot.

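           For example, one session can export a snapshot and hold its
           transaction open (the session output shown is illustrative):

               mydb=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
               mydb=# SELECT pg_export_snapshot();
                pg_export_snapshot
               ---------------------
                00000003-0000001B-1
               (1 row)

           while a concurrent pg_dump reuses it to see the same data:

               $ pg_dump --snapshot=00000003-0000001B-1 mydb > db.sql
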
       --strict-names
           Require that each extension (-e/--extension), schema
           (-n/--schema) and table (-t/--table) pattern match at least one
           extension/schema/table in the database to be dumped. Note that if
           none of the extension/schema/table patterns find matches, pg_dump
           will generate an error even without --strict-names.

           This option has no effect on -N/--exclude-schema,
           -T/--exclude-table, or --exclude-table-data. An exclude pattern
           failing to match any objects is not considered an error.

       --table-and-children=pattern
           This is the same as the -t/--table option, except that it also
           includes any partitions or inheritance child tables of the
           table(s) matching the pattern.

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes
           the dump more standards-compatible, but depending on the history
           of the objects in the dump, might not restore properly. Also, a
           dump using SET SESSION AUTHORIZATION will certainly require
           superuser privileges to restore correctly, whereas ALTER OWNER
           requires lesser privileges.

       -?
       --help
           Show help about pg_dump command line arguments, and exit.

       The following command-line options control the database connection
       parameters.

       -d dbname
       --dbname=dbname
           Specifies the name of the database to connect to. This is
           equivalent to specifying dbname as the first non-option argument
           on the command line. The dbname can be a connection string. If
           so, connection string parameters will override any conflicting
           command line options.

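           For example, a connection string can carry host and SSL settings
           (the host name is hypothetical):

               $ pg_dump -d "host=dbhost dbname=mydb sslmode=require" > db.sql
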
       -h host
       --host=host
           Specifies the host name of the machine on which the server is
           running. If the value begins with a slash, it is used as the
           directory for the Unix domain socket. The default is taken from
           the PGHOST environment variable, if set, else a Unix domain
           socket connection is attempted.

       -p port
       --port=port
           Specifies the TCP port or local Unix domain socket file extension
           on which the server is listening for connections. Defaults to the
           PGPORT environment variable, if set, or a compiled-in default.

       -U username
       --username=username
           User name to connect as.

       -w
       --no-password
           Never issue a password prompt. If the server requires password
           authentication and a password is not available by other means
           such as a .pgpass file, the connection attempt will fail. This
           option can be useful in batch jobs and scripts where no user is
           present to enter a password.

       -W
       --password
           Force pg_dump to prompt for a password before connecting to a
           database.

           This option is never essential, since pg_dump will automatically
           prompt for a password if the server demands password
           authentication. However, pg_dump will waste a connection attempt
           finding out that the server wants a password. In some cases it is
           worth typing -W to avoid the extra connection attempt.

       --role=rolename
           Specifies a role name to be used to create the dump. This option
           causes pg_dump to issue a SET ROLE rolename command after
           connecting to the database. It is useful when the authenticated
           user (specified by -U) lacks privileges needed by pg_dump, but
           can switch to a role with the required rights. Some installations
           have a policy against logging in directly as a superuser, and use
           of this option allows dumps to be made without violating the
           policy.

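           For example (alice and dump_role are hypothetical names):

               $ pg_dump -U alice --role=dump_role mydb > db.sql
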
ENVIRONMENT
       PGDATABASE
       PGHOST
       PGOPTIONS
       PGPORT
       PGUSER
           Default connection parameters.

       PG_COLOR
           Specifies whether to use color in diagnostic messages. Possible
           values are always, auto and never.

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see Section 34.15).

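       For example, connection defaults can be supplied via the environment
       (the host name is hypothetical):

           $ PGHOST=dbhost PGUSER=postgres pg_dump mydb > db.sql
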
DIAGNOSTICS
       pg_dump internally executes SELECT statements. If you have problems
       running pg_dump, make sure you are able to select information from
       the database using, for example, psql(1). Also, any default
       connection settings and environment variables used by the libpq
       front-end library will apply.

       The database activity of pg_dump is normally collected by the
       cumulative statistics system. If this is undesirable, you can set
       parameter track_counts to false via PGOPTIONS or the ALTER USER
       command.

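       A sketch of the PGOPTIONS approach (changing track_counts this way
       requires sufficient privileges, typically superuser):

           $ PGOPTIONS='-c track_counts=false' pg_dump mydb > db.sql
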
NOTES
       If your database cluster has any local additions to the template1
       database, be careful to restore the output of pg_dump into a truly
       empty database; otherwise you are likely to get errors due to
       duplicate definitions of the added objects. To make an empty database
       without any local additions, copy from template0 not template1, for
       example:

           CREATE DATABASE foo WITH TEMPLATE template0;

       When a data-only dump is chosen and the option --disable-triggers is
       used, pg_dump emits commands to disable triggers on user tables
       before inserting the data, and then commands to re-enable them after
       the data has been inserted. If the restore is stopped in the middle,
       the system catalogs might be left in the wrong state.

       The dump file produced by pg_dump does not contain the statistics
       used by the optimizer to make query planning decisions. Therefore, it
       is wise to run ANALYZE after restoring from a dump file to ensure
       optimal performance; see Section 25.1.3 and Section 25.1.6 for more
       information.

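       For example (newdb is a placeholder for the restored database):

           $ psql -d newdb -c 'ANALYZE;'
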
       Because pg_dump is used to transfer data to newer versions of
       PostgreSQL, the output of pg_dump can be expected to load into
       PostgreSQL server versions newer than pg_dump's version. pg_dump can
       also dump from PostgreSQL servers older than its own version.
       (Currently, servers back to version 9.2 are supported.) However,
       pg_dump cannot dump from PostgreSQL servers newer than its own major
       version; it will refuse to even try, rather than risk making an
       invalid dump. Also, it is not guaranteed that pg_dump's output can be
       loaded into a server of an older major version — not even if the dump
       was taken from a server of that version. Loading a dump file into an
       older server may require manual editing of the dump file to remove
       syntax not understood by the older server. Use of the
       --quote-all-identifiers option is recommended in cross-version cases,
       as it can prevent problems arising from varying reserved-word lists
       in different PostgreSQL versions.

       When dumping logical replication subscriptions, pg_dump will generate
       CREATE SUBSCRIPTION commands that use the connect = false option, so
       that restoring the subscription does not make remote connections for
       creating a replication slot or for initial table copy. That way, the
       dump can be restored without requiring network access to the remote
       servers. It is then up to the user to reactivate the subscriptions in
       a suitable way. If the involved hosts have changed, the connection
       information might have to be changed. It might also be appropriate to
       truncate the target tables before initiating a new full table copy.
       If users intend to copy initial data during refresh they must create
       the slot with two_phase = false. After the initial sync, the
       two_phase option will be automatically enabled by the subscriber if
       the subscription had been originally created with the two_phase =
       true option.

EXAMPLES
       To dump a database called mydb into an SQL-script file:

           $ pg_dump mydb > db.sql

       To reload such a script into a (freshly created) database named
       newdb:

           $ psql -d newdb -f db.sql

       To dump a database into a custom-format archive file:

           $ pg_dump -Fc mydb > db.dump

       To dump a database into a directory-format archive:

           $ pg_dump -Fd mydb -f dumpdir

       To dump a database into a directory-format archive in parallel with
       5 worker jobs:

           $ pg_dump -Fd mydb -j 5 -f dumpdir

       To reload an archive file into a (freshly created) database named
       newdb:

           $ pg_restore -d newdb db.dump

       To reload an archive file into the same database it was dumped from,
       discarding the current contents of that database:

           $ pg_restore -d postgres --clean --create db.dump

       To dump a single table named mytab:

           $ pg_dump -t mytab mydb > db.sql

       To dump all tables whose names start with emp in the detroit schema,
       except for the table named employee_log:

           $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

       To dump all schemas whose names start with east or west and end in
       gsm, excluding any schemas whose names contain the word test:

           $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

       The same, using regular expression notation to consolidate the
       switches:

           $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

       To dump all database objects except for tables whose names begin
       with ts_:

           $ pg_dump -T 'ts_*' mydb > db.sql

       To specify an upper-case or mixed-case name in -t and related
       switches, you need to double-quote the name; else it will be folded
       to lower case (see Patterns). But double quotes are special to the
       shell, so in turn they must be quoted. Thus, to dump a single table
       with a mixed-case name, you need something like

           $ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql

SEE ALSO
       pg_dumpall(1), pg_restore(1), psql(1)

PostgreSQL 16.1                      2023                          PG_DUMP(1)