PG_DUMP(1)               PostgreSQL 10.7 Documentation              PG_DUMP(1)

NAME

       pg_dump - extract a PostgreSQL database into a script file or other
       archive file

SYNOPSIS

       pg_dump [connection-option...] [option...] [dbname]

DESCRIPTION

       pg_dump is a utility for backing up a PostgreSQL database. It makes
       consistent backups even if the database is being used concurrently.
       pg_dump does not block other users accessing the database (readers or
       writers).

       pg_dump only dumps a single database. To back up global objects that
       are common to all databases in a cluster, such as roles and
       tablespaces, use pg_dumpall(1).

       Dumps can be output in script or archive file formats. Script dumps
       are plain-text files containing the SQL commands required to
       reconstruct the database to the state it was in at the time it was
       saved. To restore from such a script, feed it to psql(1). Script files
       can be used to reconstruct the database even on other machines and
       other architectures; with some modifications, even on other SQL
       database products.
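
       As a minimal sketch (the database names mydb and newdb are
       placeholders), a script dump and restore might look like:

```shell
# Dump the database "mydb" (a placeholder name) to a plain-text SQL script.
pg_dump mydb > db.sql

# Recreate it later by feeding the script to psql. The target database
# "newdb" must already exist (e.g. created with createdb).
psql -d newdb -f db.sql
```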

       The alternative archive file formats must be used with pg_restore(1)
       to rebuild the database. They allow pg_restore to be selective about
       what is restored, or even to reorder the items prior to being
       restored. The archive file formats are designed to be portable across
       architectures.

       When used with one of the archive file formats and combined with
       pg_restore, pg_dump provides a flexible archival and transfer
       mechanism.  pg_dump can be used to back up an entire database, then
       pg_restore can be used to examine the archive and/or select which
       parts of the database are to be restored. The most flexible output
       file formats are the “custom” format (-Fc) and the “directory” format
       (-Fd). They allow for selection and reordering of all archived items,
       support parallel restoration, and are compressed by default. The
       “directory” format is the only format that supports parallel dumps.
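
       For example, a selective restore from a custom-format archive might
       proceed as follows (all names here are placeholders):

```shell
# Dump "mydb" in the custom archive format:
pg_dump -Fc -f mydb.dump mydb

# List the archive's table of contents, then restore only the table
# "mytab" (a hypothetical name) into another, existing database:
pg_restore -l mydb.dump
pg_restore -d newdb -t mytab mydb.dump
```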

       While running pg_dump, one should examine the output for any warnings
       (printed on standard error), especially in light of the limitations
       listed below.

OPTIONS

       The following command-line options control the content and format of
       the output.

       dbname
           Specifies the name of the database to be dumped. If this is not
           specified, the environment variable PGDATABASE is used. If that is
           not set, the user name specified for the connection is used.

       -a
       --data-only
           Dump only the data, not the schema (data definitions). Table data,
           large objects, and sequence values are dumped.

           This option is similar to, but for historical reasons not
           identical to, specifying --section=data.

       -b
       --blobs
           Include large objects in the dump. This is the default behavior
           except when --schema, --table, or --schema-only is specified. The
           -b switch is therefore only useful to add large objects to dumps
           where a specific schema or table has been requested. Note that
           blobs are considered data and therefore will be included when
           --data-only is used, but not when --schema-only is.

       -B
       --no-blobs
           Exclude large objects from the dump.

           When both -b and -B are given, the behavior is to output large
           objects when data is being dumped; see the -b documentation.

       -c
       --clean
           Output commands to clean (drop) database objects prior to
           outputting the commands for creating them. (Unless --if-exists is
           also specified, restore might generate some harmless error
           messages, if any objects were not present in the destination
           database.)

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       -C
       --create
           Begin the output with a command to create the database itself and
           reconnect to the created database. (With a script of this form, it
           doesn't matter which database in the destination installation you
           connect to before running the script.) If --clean is also
           specified, the script drops and recreates the target database
           before reconnecting to it.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.
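
           As an illustrative sketch (mydb is a placeholder name), -C and
           --clean can be combined to produce a fully self-contained script:

```shell
# Produce a script that drops the database "mydb" if it exists,
# recreates it, reconnects, and restores its contents. --if-exists
# suppresses errors when objects do not yet exist.
pg_dump -C -c --if-exists mydb > mydb.sql

# Because the script recreates "mydb" itself, it can be run while
# connected to any database in the destination cluster:
psql -d postgres -f mydb.sql
```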

       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another
           way to get the same result is to set the PGCLIENTENCODING
           environment variable to the desired dump encoding.)

       -f file
       --file=file
           Send output to the specified file. This parameter can be omitted
           for file-based output formats, in which case the standard output
           is used. It must be given for the directory output format,
           however, where it specifies the target directory instead of a
           file. In this case the directory is created by pg_dump and must
           not exist beforehand.

       -F format
       --format=format
           Selects the format of the output.  format can be one of the
           following:

           p
           plain
               Output a plain-text SQL script file (the default).

           c
           custom
               Output a custom-format archive suitable for input into
               pg_restore. Together with the directory output format, this is
               the most flexible output format in that it allows manual
               selection and reordering of archived items during restore.
               This format is also compressed by default.

           d
           directory
               Output a directory-format archive suitable for input into
               pg_restore. This will create a directory with one file for
               each table and blob being dumped, plus a so-called Table of
               Contents file describing the dumped objects in a
               machine-readable format that pg_restore can read. A directory
               format archive can be manipulated with standard Unix tools;
               for example, files in an uncompressed archive can be
               compressed with the gzip tool. This format is compressed by
               default and also supports parallel dumps.

           t
           tar
               Output a tar-format archive suitable for input into
               pg_restore. The tar format is compatible with the directory
               format: extracting a tar-format archive produces a valid
               directory-format archive. However, the tar format does not
               support compression. Also, when using tar format the relative
               order of table data items cannot be changed during restore.

       -j njobs
       --jobs=njobs
           Run the dump in parallel by dumping njobs tables simultaneously.
           This option reduces the time of the dump but it also increases the
           load on the database server. You can only use this option with the
           directory output format because this is the only output format
           where multiple processes can write their data at the same time.

           pg_dump will open njobs + 1 connections to the database, so make
           sure your max_connections setting is high enough to accommodate
           all connections.

           Requesting exclusive locks on database objects while running a
           parallel dump could cause the dump to fail. The reason is that the
           pg_dump master process requests shared locks on the objects that
           the worker processes are going to dump later, in order to make
           sure that nobody deletes them while the dump is running. If
           another client then requests an exclusive lock on a table, that
           lock will not be granted but will be queued waiting for the shared
           lock of the master process to be released. Consequently any other
           access to the table will not be granted either and will queue
           after the exclusive lock request. This includes the worker process
           trying to dump the table. Without any precautions this would be a
           classic deadlock situation. To detect this conflict, the pg_dump
           worker process requests another shared lock using the NOWAIT
           option. If the worker process is not granted this shared lock,
           somebody else must have requested an exclusive lock in the
           meantime and there is no way to continue with the dump, so pg_dump
           has no choice but to abort the dump.

           For a consistent backup, the database server needs to support
           synchronized snapshots, a feature that was introduced in
           PostgreSQL 9.2 for primary servers and 10 for standbys. With this
           feature, database clients can ensure they see the same data set
           even though they use different connections.  pg_dump -j uses
           multiple database connections; it connects to the database once
           with the master process and once again for each worker job.
           Without the synchronized snapshot feature, the different worker
           jobs wouldn't be guaranteed to see the same data in each
           connection, which could lead to an inconsistent backup.

           If you want to run a parallel dump of a pre-9.2 server, you need
           to make sure that the database content doesn't change between the
           time the master connects to the database and the time the last
           worker job has connected to the database. The easiest way to do
           this is to halt any data modifying processes (DDL and DML)
           accessing the database before starting the backup. You also need
           to specify the --no-synchronized-snapshots parameter when running
           pg_dump -j against a pre-9.2 PostgreSQL server.
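
           A parallel dump and restore might be sketched as follows (mydb
           and newdb are placeholder names):

```shell
# Dump with 4 worker jobs; parallel dumps require the directory
# format (-Fd), and the target directory must not exist beforehand:
pg_dump -j 4 -Fd -f mydb.dump mydb

# The directory-format archive can also be restored in parallel:
pg_restore -j 4 -d newdb mydb.dump
```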

       -n schema
       --schema=schema
           Dump only schemas matching schema; this selects both the schema
           itself, and all its contained objects. When this option is not
           specified, all non-system schemas in the target database will be
           dumped. Multiple schemas can be selected by writing multiple -n
           switches. Also, the schema parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple schemas can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see EXAMPLES.

               Note
               When -n is specified, pg_dump makes no attempt to dump any
               other database objects that the selected schema(s) might
               depend upon. Therefore, there is no guarantee that the results
               of a specific-schema dump can be successfully restored by
               themselves into a clean database.

               Note
               Non-schema objects such as blobs are not dumped when -n is
               specified. You can add blobs back to the dump with the --blobs
               switch.
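
           For illustration (the schema and database names are
           hypothetical), multiple -n switches and a quoted wildcard pattern
           can select the same schemas:

```shell
# Dump only the schemas "east" and "west":
pg_dump -n east -n west mydb > db.sql

# Equivalently, with a pattern; the quotes prevent the shell from
# expanding the wildcard:
pg_dump -n '*st' mydb > db.sql
```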

       -N schema
       --exclude-schema=schema
           Do not dump any schemas matching the schema pattern. The pattern
           is interpreted according to the same rules as for -n.  -N can be
           given more than once to exclude schemas matching any of several
           patterns.

           When both -n and -N are given, the behavior is to dump just the
           schemas that match at least one -n switch but no -N switches. If
           -N appears without -n, then schemas matching -N are excluded from
           what is otherwise a normal dump.

       -o
       --oids
           Dump object identifiers (OIDs) as part of the data for every
           table. Use this option if your application references the OID
           columns in some way (e.g., in a foreign key constraint).
           Otherwise, this option should not be used.

       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dump issues ALTER OWNER or SET
           SESSION AUTHORIZATION statements to set ownership of created
           database objects. These statements will fail when the script is
           run unless it is started by a superuser (or the same user that
           owns all of the objects in the script). To make a script that can
           be restored by any user, but will give that user ownership of all
           the objects, specify -O.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       -R
       --no-reconnect
           This option is obsolete but still accepted for backwards
           compatibility.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

           This option is the inverse of --data-only. It is similar to, but
           for historical reasons not identical to, specifying
           --section=pre-data --section=post-data.

           (Do not confuse this with the --schema option, which uses the word
           “schema” in a different meaning.)

           To exclude table data for only a subset of tables in the
           database, see --exclude-table-data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually,
           it's better to leave this out, and instead start the resulting
           script as superuser.)

       -t table
       --table=table
           Dump only tables with names matching table. For this purpose,
           “table” includes views, materialized views, sequences, and foreign
           tables. Multiple tables can be selected by writing multiple -t
           switches. Also, the table parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple tables can also be selected by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent the shell from
           expanding the wildcards; see EXAMPLES.

           The -n and -N switches have no effect when -t is used, because
           tables selected by -t will be dumped regardless of those switches,
           and non-table objects will not be dumped.

               Note
               When -t is specified, pg_dump makes no attempt to dump any
               other database objects that the selected table(s) might depend
               upon. Therefore, there is no guarantee that the results of a
               specific-table dump can be successfully restored by themselves
               into a clean database.

               Note
               The behavior of the -t switch is not entirely upward
               compatible with pre-8.2 PostgreSQL versions. Formerly, writing
               -t tab would dump all tables named tab, but now it just dumps
               whichever one is visible in your default search path. To get
               the old behavior you can write -t '*.tab'. Also, you must
               write something like -t sch.tab to select a table in a
               particular schema, rather than the old locution of -n sch -t
               tab.
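
           To illustrate the pattern rules (table, schema, and database
           names here are hypothetical):

```shell
# Dump only the table "mytab" that is visible in the default search
# path:
pg_dump -t mytab mydb > db.sql

# Dump all tables named "mytab", in any schema; the quotes keep the
# shell from expanding the wildcard:
pg_dump -t '*.mytab' mydb > db.sql

# Dump the table "mytab" in the schema "sch" specifically:
pg_dump -t sch.mytab mydb > db.sql
```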

       -T table
       --exclude-table=table
           Do not dump any tables matching the table pattern. The pattern is
           interpreted according to the same rules as for -t.  -T can be
           given more than once to exclude tables matching any of several
           patterns.

           When both -t and -T are given, the behavior is to dump just the
           tables that match at least one -t switch but no -T switches. If -T
           appears without -t, then tables matching -T are excluded from what
           is otherwise a normal dump.

       -v
       --verbose
           Specifies verbose mode. This will cause pg_dump to output detailed
           object comments and start/stop times to the dump file, and
           progress messages to standard error.

       -V
       --version
           Print the pg_dump version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       -Z 0..9
       --compress=0..9
           Specify the compression level to use. Zero means no compression.
           For the custom archive format, this specifies compression of
           individual table-data segments, and the default is to compress at
           a moderate level. For plain text output, setting a nonzero
           compression level causes the entire output file to be compressed,
           as though it had been fed through gzip; but the default is not to
           compress. The tar archive format currently does not support
           compression at all.
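
           For example (mydb and newdb are placeholder names), a compressed
           plain-text dump can be taken and restored like this:

```shell
# Maximum compression; the whole output file is compressed as though
# it had been fed through gzip:
pg_dump -Z 9 mydb > db.sql.gz

# Decompress and replay the script to restore:
gunzip -c db.sql.gz | psql -d newdb
```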

       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of
           the option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be loaded
           into non-PostgreSQL databases. However, since this option
           generates a separate command for each row, an error in reloading a
           row causes only that row to be lost rather than the entire table
           contents.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function
           bodies, and forces them to be quoted using SQL standard string
           syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dump to include commands to temporarily disable
           triggers on the target tables while the data is reloaded. Use this
           if you have referential integrity checks or other triggers on the
           tables that you do not want to invoke during data reload.

           Presently, the commands emitted for --disable-triggers must be
           done as superuser. So, you should also specify a superuser name
           with -S, or preferably be careful to start the resulting script as
           a superuser.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       --enable-row-security
           This option is relevant only when dumping the contents of a table
           which has row security. By default, pg_dump will set row_security
           to off, to ensure that all data is dumped from the table. If the
           user does not have sufficient privileges to bypass row security,
           then an error is thrown. This parameter instructs pg_dump to set
           row_security to on instead, allowing the user to dump the parts of
           the contents of the table that they have access to.

           Note that if you use this option currently, you probably also want
           the dump to be in INSERT format, as the COPY FROM during restore
           does not support row security.

       --exclude-table-data=table
           Do not dump data for any tables matching the table pattern. The
           pattern is interpreted according to the same rules as for -t.
           --exclude-table-data can be given more than once to exclude tables
           matching any of several patterns. This option is useful when you
           need the definition of a particular table even though you do not
           need the data in it.

           To exclude data for all tables in the database, see --schema-only.
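
           For instance (the table and database names are hypothetical), the
           definition of a large log table can be kept while its rows are
           skipped:

```shell
# Dump everything except the row data of "app_log"; its CREATE TABLE
# statement is still included in the output:
pg_dump --exclude-table-data=app_log mydb > db.sql
```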

       --if-exists
           Use conditional commands (i.e., add an IF EXISTS clause) when
           cleaning database objects. This option is not valid unless --clean
           is also specified.

       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. However, since this
           option generates a separate command for each row, an error in
           reloading a row causes only that row to be lost rather than the
           entire table contents. Note that the restore might fail altogether
           if you have rearranged column order. The --column-inserts option
           is safe against column order changes, though even slower.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the beginning
           of the dump. Instead fail if unable to lock a table within the
           specified timeout. The timeout may be specified in any of the
           formats accepted by SET statement_timeout. (Allowed formats vary
           depending on the server version you are dumping from, but an
           integer number of milliseconds is accepted by all versions.)

       --no-publications
           Do not dump publications.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dump will wait for all files to be written safely
           to disk. This option causes pg_dump to return without waiting,
           which is faster, but means that a subsequent operating system
           crash can leave the dump corrupt. Generally, this option is useful
           for testing but should not be used when dumping data from a
           production installation.

       --no-synchronized-snapshots
           This option allows running pg_dump -j against a pre-9.2 server;
           see the documentation of the -j parameter for more details.

       --no-tablespaces
           Do not output commands to select tablespaces. With this option,
           all objects will be created in whichever tablespace is the default
           during restore.

           This option is only meaningful for the plain-text format. For the
           archive formats, you can specify the option when you call
           pg_restore.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables. This option has no
           effect on whether or not the table definitions (schema) are
           dumped; it only suppresses dumping the table data. Data in
           unlogged tables is always excluded when dumping from a standby
           server.

       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version is
           different from pg_dump's, or when the output is intended to be
           loaded into a server of a different major version. By default,
           pg_dump quotes only identifiers that are reserved words in its own
           major version. This sometimes results in compatibility issues when
           dealing with servers of other versions that may have slightly
           different sets of reserved words. Using --quote-all-identifiers
           prevents such issues, at the price of a harder-to-read dump
           script.

       --section=sectionname
           Only dump the named section. The section name can be pre-data,
           data, or post-data. This option can be specified more than once to
           select multiple sections. The default is to dump all sections.

           The data section contains actual table data, large-object
           contents, and sequence values. Post-data items include definitions
           of indexes, triggers, rules, and constraints other than validated
           check constraints. Pre-data items include all other data
           definition items.
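
           As a sketch (mydb is a placeholder name), sections can be dumped
           into separate files. Note that several --section switches in a
           single invocation share one snapshot, whereas separate
           invocations each take their own:

```shell
# One invocation, one consistent snapshot, two sections:
pg_dump --section=pre-data --section=data mydb > schema_and_data.sql

# Post-data items (indexes, triggers, constraints) in their own file,
# taken by a separate invocation (and thus a separate snapshot):
pg_dump --section=post-data mydb > post.sql
```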

       --serializable-deferrable
           Use a serializable transaction for the dump, to ensure that the
           snapshot used is consistent with later database states; but do
           this by waiting for a point in the transaction stream at which no
           anomalies can be present, so that there isn't a risk of the dump
           failing or causing other transactions to roll back with a
           serialization_failure. See Chapter 13 for more information about
           transaction isolation and concurrency control.

           This option is not beneficial for a dump which is intended only
           for disaster recovery. It could be useful for a dump used to load
           a copy of the database for reporting or other read-only load
           sharing while the original database continues to be updated.
           Without it the dump may reflect a state which is not consistent
           with any serial execution of the transactions eventually
           committed. For example, if batch processing techniques are used, a
           batch may show as closed in the dump without all of the items
           which are in the batch appearing.

           This option will make no difference if there are no read-write
           transactions active when pg_dump is started. If read-write
           transactions are active, the start of the dump may be delayed for
           an indeterminate length of time. Once running, performance with or
           without the switch is the same.

       --snapshot=snapshotname
           Use the specified synchronized snapshot when making a dump of the
           database (see Table 9.82 for more details).

           This option is useful when needing to synchronize the dump with a
           logical replication slot (see Chapter 48) or with a concurrent
           session.

           In the case of a parallel dump, the snapshot name defined by this
           option is used rather than taking a new snapshot.

       --strict-names
           Require that each schema (-n/--schema) and table (-t/--table)
           qualifier match at least one schema/table in the database to be
           dumped. Note that if none of the schema/table qualifiers find
           matches, pg_dump will generate an error even without
           --strict-names.

           This option has no effect on -N/--exclude-schema,
           -T/--exclude-table, or --exclude-table-data. An exclude pattern
           failing to match any objects is not considered an error.

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes the
           dump more standards-compatible, but depending on the history of
           the objects in the dump, might not restore properly. Also, a dump
           using SET SESSION AUTHORIZATION will certainly require superuser
           privileges to restore correctly, whereas ALTER OWNER requires
           lesser privileges.

       -?
       --help
           Show help about pg_dump command line arguments, and exit.

       The following command-line options control the database connection
       parameters.

       -d dbname
       --dbname=dbname
           Specifies the name of the database to connect to. This is
           equivalent to specifying dbname as the first non-option argument
           on the command line.

           If this parameter contains an = sign or starts with a valid URI
           prefix (postgresql:// or postgres://), it is treated as a conninfo
           string. See Section 33.1 for more information.
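
           For example, the following invocations are equivalent (host,
           user, and database names are placeholders):

```shell
# Connection parameters given as individual switches:
pg_dump -h db.example.com -U admin mydb > db.sql

# The same connection expressed as a URI:
pg_dump --dbname=postgresql://admin@db.example.com:5432/mydb > db.sql

# The same connection expressed as a conninfo string:
pg_dump "host=db.example.com user=admin dbname=mydb" > db.sql
```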
572
573       -h host
574       --host=host
575           Specifies the host name of the machine on which the server is
576           running. If the value begins with a slash, it is used as the
577           directory for the Unix domain socket. The default is taken from the
578           PGHOST environment variable, if set, else a Unix domain socket
579           connection is attempted.
580
581       -p port
582       --port=port
583           Specifies the TCP port or local Unix domain socket file extension
584           on which the server is listening for connections. Defaults to the
585           PGPORT environment variable, if set, or a compiled-in default.
586
587       -U username
588       --username=username
589           User name to connect as.
590
591       -w
592       --no-password
593           Never issue a password prompt. If the server requires password
594           authentication and a password is not available by other means such
595           as a .pgpass file, the connection attempt will fail. This option
596           can be useful in batch jobs and scripts where no user is present to
597           enter a password.
598
       -W
       --password
           Force pg_dump to prompt for a password before connecting to a
           database.

           This option is never essential, since pg_dump will automatically
           prompt for a password if the server demands password
           authentication. However, pg_dump will waste a connection attempt
           finding out that the server wants a password. In some cases it is
           worth typing -W to avoid the extra connection attempt.

       --role=rolename
           Specifies a role name to be used to create the dump. This option
           causes pg_dump to issue a SET ROLE rolename command after
           connecting to the database. It is useful when the authenticated
           user (specified by -U) lacks privileges needed by pg_dump, but can
           switch to a role with the required rights. Some installations have
           a policy against logging in directly as a superuser, and use of
           this option allows dumps to be made without violating the policy.

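       A typical use, assuming a hypothetical role dump_role that holds the
       read privileges the login user alice lacks:

```shell
# Connect as an unprivileged user, then SET ROLE to one that may read
# everything being dumped.
pg_dump -U alice --role=dump_role mydb > db.sql
```
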

ENVIRONMENT
       PGDATABASE
       PGHOST
       PGOPTIONS
       PGPORT
       PGUSER
           Default connection parameters.

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see Section 33.14).

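       These variables can stand in for the corresponding command-line
       options; for instance (host and database names are hypothetical):

```shell
# Equivalent to: pg_dump -h db.example.com -p 5432 -U backup_user mydb
PGHOST=db.example.com PGPORT=5432 PGUSER=backup_user pg_dump mydb > db.sql
```
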

DIAGNOSTICS
       pg_dump internally executes SELECT statements. If you have problems
       running pg_dump, make sure you are able to select information from the
       database using, for example, psql(1). Also, any default connection
       settings and environment variables used by the libpq front-end library
       will apply.

       The database activity of pg_dump is normally collected by the
       statistics collector. If this is undesirable, you can set parameter
       track_counts to false via PGOPTIONS or the ALTER USER command.

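       One way to do this for a single run, sketched with a hypothetical
       database name:

```shell
# Disable statistics collection for this session only, via libpq's
# PGOPTIONS mechanism; other sessions are unaffected.
PGOPTIONS='-c track_counts=off' pg_dump mydb > db.sql
```
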

NOTES
       If your database cluster has any local additions to the template1
       database, be careful to restore the output of pg_dump into a truly
       empty database; otherwise you are likely to get errors due to duplicate
       definitions of the added objects. To make an empty database without any
       local additions, copy from template0 not template1, for example:

           CREATE DATABASE foo WITH TEMPLATE template0;

       When a data-only dump is chosen and the option --disable-triggers is
       used, pg_dump emits commands to disable triggers on user tables before
       inserting the data, and then commands to re-enable them after the data
       has been inserted. If the restore is stopped in the middle, the system
       catalogs might be left in the wrong state.

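       A data-only dump of this kind might look like the following (the
       database name is hypothetical; because it disables triggers, the
       resulting script must be restored with superuser rights):

```shell
# Data only, with commands to disable triggers emitted around the data.
pg_dump --data-only --disable-triggers mydb > data.sql
```
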
       The dump file produced by pg_dump does not contain the statistics used
       by the optimizer to make query planning decisions. Therefore, it is
       wise to run ANALYZE after restoring from a dump file to ensure optimal
       performance; see Section 24.1.3 and Section 24.1.6 for more
       information. The dump file also does not contain any ALTER DATABASE ...
       SET commands; these settings are dumped by pg_dumpall(1), along with
       database users and other installation-wide settings.

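       For example, after restoring into a hypothetical database newdb,
       either of the following rebuilds the planner statistics:

```shell
# vacuumdb's --analyze-only runs ANALYZE across the whole database...
vacuumdb --analyze-only newdb
# ...or psql can issue the command directly.
psql -d newdb -c 'ANALYZE;'
```
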
       Because pg_dump is used to transfer data to newer versions of
       PostgreSQL, the output of pg_dump can be expected to load into
       PostgreSQL server versions newer than pg_dump's version.  pg_dump can
       also dump from PostgreSQL servers older than its own version.
       (Currently, servers back to version 8.0 are supported.) However,
       pg_dump cannot dump from PostgreSQL servers newer than its own major
       version; it will refuse to even try, rather than risk making an invalid
       dump. Also, it is not guaranteed that pg_dump's output can be loaded
       into a server of an older major version, not even if the dump was
       taken from a server of that version. Loading a dump file into an older
       server may require manual editing of the dump file to remove syntax not
       understood by the older server. Use of the --quote-all-identifiers
       option is recommended in cross-version cases, as it can prevent
       problems arising from varying reserved-word lists in different
       PostgreSQL versions.

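       For instance, when producing a dump intended for a different server
       version (database name hypothetical):

```shell
# Quote every identifier so that words reserved only in the target
# version cannot break the restore.
pg_dump --quote-all-identifiers mydb > db.sql
```
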
       When dumping logical replication subscriptions, pg_dump will generate
       CREATE SUBSCRIPTION commands that use the connect = false option, so
       that restoring the subscription does not make remote connections for
       creating a replication slot or for initial table copy. That way, the
       dump can be restored without requiring network access to the remote
       servers. It is then up to the user to reactivate the subscriptions in a
       suitable way. If the involved hosts have changed, the connection
       information might have to be changed. It might also be appropriate to
       truncate the target tables before initiating a new full table copy.

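       Reactivating a subscription after such a restore might look like this
       (the subscription name and connection string are hypothetical):

```shell
# Point the subscription at the (possibly relocated) publisher, then
# enable it so replication resumes.
psql -d mydb -c "ALTER SUBSCRIPTION mysub CONNECTION 'host=newhost dbname=srcdb';"
psql -d mydb -c "ALTER SUBSCRIPTION mysub ENABLE;"
```
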

EXAMPLES
       To dump a database called mydb into a SQL-script file:

           $ pg_dump mydb > db.sql

       To reload such a script into a (freshly created) database named newdb:

           $ psql -d newdb -f db.sql

       To dump a database into a custom-format archive file:

           $ pg_dump -Fc mydb > db.dump

       To dump a database into a directory-format archive:

           $ pg_dump -Fd mydb -f dumpdir

       To dump a database into a directory-format archive in parallel with 5
       worker jobs:

           $ pg_dump -Fd mydb -j 5 -f dumpdir

       To reload an archive file into a (freshly created) database named
       newdb:

           $ pg_restore -d newdb db.dump

       To dump a single table named mytab:

           $ pg_dump -t mytab mydb > db.sql

       To dump all tables whose names start with emp in the detroit schema,
       except for the table named employee_log:

           $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

       To dump all schemas whose names start with east or west and end in gsm,
       excluding any schemas whose names contain the word test:

           $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

       The same, using regular expression notation to consolidate the
       switches:

           $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

       To dump all database objects except for tables whose names begin with
       ts_:

           $ pg_dump -T 'ts_*' mydb > db.sql

       To specify an upper-case or mixed-case name in -t and related switches,
       you need to double-quote the name; else it will be folded to lower case
       (see Patterns). But double quotes are special to the shell, so in turn
       they must be quoted. Thus, to dump a single table with a mixed-case
       name, you need something like

           $ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql


SEE ALSO
       pg_dumpall(1), pg_restore(1), psql(1)



PostgreSQL 10.7                      2019                           PG_DUMP(1)