1 PG_DUMP(1) PostgreSQL Client Applications PG_DUMP(1)
2
3
4
NAME
6 pg_dump - extract a PostgreSQL database into a script file or other ar‐
7 chive file
8
9
SYNOPSIS
11 pg_dump [ option... ] [ dbname ]
12
DESCRIPTION
14 pg_dump is a utility for backing up a PostgreSQL database. It makes
15 consistent backups even if the database is being used concurrently.
16 pg_dump does not block other users accessing the database (readers or
17 writers).
18
19 Dumps can be output in script or archive file formats. Script dumps are
20 plain-text files containing the SQL commands required to reconstruct
21 the database to the state it was in at the time it was saved. To
22 restore from such a script, feed it to psql(1). Script files can be
23 used to reconstruct the database even on other machines and other
24 architectures; with some modifications even on other SQL database prod‐
25 ucts.
26
27 The alternative archive file formats must be used with pg_restore(1) to
28 rebuild the database. They allow pg_restore to be selective about what
29 is restored, or even to reorder the items prior to being restored. The
30 archive file formats are designed to be portable across architectures.
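
       For example, one common approach (a sketch only; the archive name
       db.dump, the list file db.list, and the database name newdb are illus‐
       trative) is to list an archive's contents with pg_restore, edit the
       resulting list to drop unwanted items, and then restore only what
       remains:

       $ pg_restore -l db.dump > db.list
       (edit db.list, commenting out any entries that should not be restored)
       $ pg_restore -L db.list -d newdb db.dump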
31
32 When used with one of the archive file formats and combined with
33 pg_restore, pg_dump provides a flexible archival and transfer mecha‐
34 nism. pg_dump can be used to back up an entire database, then pg_restore
35 can be used to examine the archive and/or select which parts of the
36 database are to be restored. The most flexible output file format is
37 the ``custom'' format (-Fc). It allows for selection and reordering of
38 all archived items, and is compressed by default. The tar format (-Ft)
39 is not compressed and it is not possible to reorder data when loading,
40 but it is otherwise quite flexible; moreover, it can be manipulated
41 with standard Unix tools such as tar.
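
       As an illustration (the database name mydb is arbitrary), the two
       archive formats are produced like this, and a tar archive can be
       inspected with ordinary tar:

       $ pg_dump -Fc mydb > mydb.dump
       $ pg_dump -Ft mydb > mydb.tar
       $ tar -tf mydb.tar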
42
43 While running pg_dump, one should examine the output for any warnings
44 (printed on standard error), especially in light of the limitations
45 listed below.
46
OPTIONS
48 The following command-line options control the content and format of
49 the output.
50
51 dbname Specifies the name of the database to be dumped. If this is not
52 specified, the environment variable PGDATABASE is used. If that
53 is not set, the user name specified for the connection is used.
54
55 -a
56
57 --data-only
58 Dump only the data, not the schema (data definitions).
59
60 This option is only meaningful for the plain-text format. For
61 the archive formats, you may specify the option when you call
62 pg_restore.
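
              A minimal sketch of a data-only dump and reload, assuming the
              table definitions already exist in the target database (the
              names mydb and newdb are illustrative):

              $ pg_dump -a mydb > data.sql
              $ psql -d newdb -f data.sql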
63
64 -b
65
66 --blobs
67 Include large objects in the dump. This is the default behavior
68 except when --schema, --table, or --schema-only is specified, so
69 the -b switch is only useful to add large objects to selective
70 dumps.
71
72 -c
73
74 --clean
75 Output commands to clean (drop) database objects prior to (the
76 commands for) creating them.
77
78 This option is only meaningful for the plain-text format. For
79 the archive formats, you may specify the option when you call
80 pg_restore.
81
82 -C
83
84 --create
85 Begin the output with a command to create the database itself
86 and reconnect to the created database. (With a script of this
87 form, it doesn't matter which database you connect to before
88 running the script.)
89
90 This option is only meaningful for the plain-text format. For
91 the archive formats, you may specify the option when you call
92 pg_restore.
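
              For instance, a script produced with -C can be run while con‐
              nected to any existing database, such as template1; it then
              creates and populates mydb itself (names are illustrative):

              $ pg_dump -C mydb > db.sql
              $ psql -d template1 -f db.sql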
93
94 -d
95
96 --inserts
97 Dump data as INSERT commands (rather than COPY). This will make
98 restoration very slow; it is mainly useful for making dumps that
99 can be loaded into non-PostgreSQL databases. Also, since this
100 option generates a separate command for each row, an error in
101 reloading a row causes only that row to be lost rather than the
102 entire table contents. Note that the restore may fail alto‐
103 gether if you have rearranged column order. The -D option is
104 safe against column order changes, though even slower.
105
106 -D
107
108 --column-inserts
109
110 --attribute-inserts
111 Dump data as INSERT commands with explicit column names (INSERT
112 INTO table (column, ...) VALUES ...). This will make restoration
113 very slow; it is mainly useful for making dumps that can be
114 loaded into non-PostgreSQL databases. Also, since this option
115 generates a separate command for each row, an error in reloading
116 a row causes only that row to be lost rather than the entire ta‐
117 ble contents.
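
              A sketch of what this looks like for a hypothetical table mytab
              with columns id and name (the exact output depends on the
              table's definition and contents):

              $ pg_dump -D -t mytab mydb > mytab.sql

              which emits INSERT commands of the form

              INSERT INTO mytab (id, name) VALUES (1, 'example');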
118
119 -E encoding
120
121 --encoding=encoding
122 Create the dump in the specified character set encoding. By
123 default, the dump is created in the database encoding. (Another
124 way to get the same result is to set the PGCLIENTENCODING envi‐
125 ronment variable to the desired dump encoding.)
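
              For example, either of the following produces the dump in a
              specific encoding (UTF8 here is just an illustration):

              $ pg_dump -E UTF8 mydb > db.sql
              $ PGCLIENTENCODING=UTF8 pg_dump mydb > db.sql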
126
127 -f file
128
129 --file=file
130 Send output to the specified file. If this is omitted, the stan‐
131 dard output is used.
132
133 -F format
134
135 --format=format
136 Selects the format of the output. format can be one of the fol‐
137 lowing:
138
139 p
140
141 plain Output a plain-text SQL script file (the default).
142
143 c
144
145 custom Output a custom archive suitable for input into
146 pg_restore. This is the most flexible format in that it
147 allows reordering of loading data as well as object defi‐
148 nitions. This format is also compressed by default.
149
150 t
151
152 tar Output a tar archive suitable for input into pg_restore.
153 Using this archive format allows reordering and/or exclu‐
154 sion of database objects at the time the database is
155 restored. It is also possible to limit which data is
156 reloaded at restore time.
157
158 -i
159
160 --ignore-version
161 Ignore version mismatch between pg_dump and the database server.
162
163 pg_dump can dump from servers running previous releases of Post‐
164 greSQL, but very old versions are not supported anymore (cur‐
165 rently, those prior to 7.0). Dumping from a server newer than
166 pg_dump is likely not to work at all. Use this option if you
167 need to override the version check (and if pg_dump then fails,
168 don't say you weren't warned).
169
170 -n schema
171
172 --schema=schema
173 Dump only schemas matching schema; this selects both the schema
174 itself, and all its contained objects. When this option is not
175 specified, all non-system schemas in the target database will be
176 dumped. Multiple schemas can be selected by writing multiple -n
177 switches. Also, the schema parameter is interpreted as a pattern
178 according to the same rules used by psql's \d commands (see Pat‐
179 terns [psql(1)]), so multiple schemas can also be selected by
180 writing wildcard characters in the pattern. When using wild‐
181 cards, be careful to quote the pattern if needed to prevent the
182 shell from expanding the wildcards.
183
184 Note: When -n is specified, pg_dump makes no attempt to dump any
185 other database objects that the selected schema(s) may depend
186 upon. Therefore, there is no guarantee that the results of a
187 specific-schema dump can be successfully restored by themselves
188 into a clean database.
189
190
191 Note: Non-schema objects such as blobs are not dumped when -n is
192 specified. You can add blobs back to the dump with the --blobs
193 switch.
194
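              For example (the schema names are hypothetical), to dump two
              specific schemas plus every schema whose name starts with
              report_, quoting the wildcard so the shell does not expand it:

              $ pg_dump -n sales -n marketing -n 'report_*' mydb > schemas.sql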
195
196 -N schema
197
198 --exclude-schema=schema
199 Do not dump any schemas matching the schema pattern. The pattern
200 is interpreted according to the same rules as for -n. -N can be
201 given more than once to exclude schemas matching any of several
202 patterns.
203
204 When both -n and -N are given, the behavior is to dump just the
205 schemas that match at least one -n switch but no -N switches. If
206 -N appears without -n, then schemas matching -N are excluded
207 from what is otherwise a normal dump.
208
209 -o
210
211 --oids Dump object identifiers (OIDs) as part of the data for every ta‐
212 ble. Use this option if your application references the OID col‐
213 umns in some way (e.g., in a foreign key constraint). Other‐
214 wise, this option should not be used.
215
216 -O
217
218 --no-owner
219 Do not output commands to set ownership of objects to match the
220 original database. By default, pg_dump issues ALTER OWNER or
221 SET SESSION AUTHORIZATION statements to set ownership of created
222 database objects. These statements will fail when the script is
223 run unless it is started by a superuser (or the same user that
224 owns all of the objects in the script). To make a script that
225 can be restored by any user, but will give that user ownership
226 of all the objects, specify -O.
227
228 This option is only meaningful for the plain-text format. For
229 the archive formats, you may specify the option when you call
230 pg_restore.
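
              As a sketch (names illustrative), a plain-text dump made with
              -O can be restored by an ordinary user, who then owns all the
              objects; with an archive format, the same effect is obtained by
              passing -O to pg_restore instead:

              $ pg_dump -O mydb > db.sql
              $ pg_dump -Fc mydb > db.dump
              $ pg_restore -O -d newdb db.dump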
231
232 -R
233
234 --no-reconnect
235 This option is obsolete but still accepted for backwards compat‐
236 ibility.
237
238 -s
239
240 --schema-only
241 Dump only the object definitions (schema), not data.
242
243 -S username
244
245 --superuser=username
246 Specify the superuser user name to use when disabling triggers.
247 This is only relevant if --disable-triggers is used. (Usually,
248 it's better to leave this out, and instead start the resulting
249 script as superuser.)
250
251 -t table
252
253 --table=table
254 Dump only tables (or views or sequences) matching table. Multi‐
255 ple tables can be selected by writing multiple -t switches.
256 Also, the table parameter is interpreted as a pattern according
257 to the same rules used by psql's \d commands (see Patterns
258 [psql(1)]), so multiple tables can also be selected by writing
259 wildcard characters in the pattern. When using wildcards, be
260 careful to quote the pattern if needed to prevent the shell from
261 expanding the wildcards.
262
263 The -n and -N switches have no effect when -t is used, because
264 tables selected by -t will be dumped regardless of those
265 switches, and non-table objects will not be dumped.
266
267 Note: When -t is specified, pg_dump makes no attempt to dump any
268 other database objects that the selected table(s) may depend
269 upon. Therefore, there is no guarantee that the results of a
270 specific-table dump can be successfully restored by themselves
271 into a clean database.
272
273
274 Note: The behavior of the -t switch is not entirely upward com‐
275 patible with pre-8.2 PostgreSQL versions. Formerly, writing -t
276 tab would dump all tables named tab, but now it just dumps
277 whichever one is visible in your default search path. To get the
278 old behavior you can write -t '*.tab'. Also, you must write
279 something like -t sch.tab to select a table in a particular
280 schema, rather than the old locution of -n sch -t tab.
281
282
283 -T table
284
285 --exclude-table=table
286 Do not dump any tables matching the table pattern. The pattern
287 is interpreted according to the same rules as for -t. -T can be
288 given more than once to exclude tables matching any of several
289 patterns.
290
291 When both -t and -T are given, the behavior is to dump just the
292 tables that match at least one -t switch but no -T switches. If
293 -T appears without -t, then tables matching -T are excluded from
294 what is otherwise a normal dump.
295
296 -v
297
298 --verbose
299 Specifies verbose mode. This will cause pg_dump to output
300 detailed object comments and start/stop times to the dump file,
301 and progress messages to standard error.
302
303 -x
304
305 --no-privileges
306
307 --no-acl
308 Prevent dumping of access privileges (grant/revoke commands).
309
310 --disable-dollar-quoting
311 This option disables the use of dollar quoting for function bod‐
312 ies, and forces them to be quoted using SQL standard string syn‐
313 tax.
314
315 --disable-triggers
316 This option is only relevant when creating a data-only dump. It
317 instructs pg_dump to include commands to temporarily disable
318 triggers on the target tables while the data is reloaded. Use
319 this if you have referential integrity checks or other triggers
320 on the tables that you do not want to invoke during data reload.
321
322 Presently, the commands emitted for --disable-triggers must be
323 done as superuser. So, you should also specify a superuser name
324 with -S, or preferably be careful to start the resulting script
325 as a superuser.
326
327 This option is only meaningful for the plain-text format. For
328 the archive formats, you may specify the option when you call
329 pg_restore.
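
              A sketch of a data-only dump that disables triggers during
              reload, naming a superuser role (the role name postgres and the
              database names are illustrative):

              $ pg_dump -a --disable-triggers -S postgres mydb > data.sql
              $ psql -d newdb -f data.sql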
330
331 --use-set-session-authorization
332 Output SQL-standard SET SESSION AUTHORIZATION commands instead
333 of ALTER OWNER commands to determine object ownership. This
334 makes the dump more standards compatible, but depending on the
335 history of the objects in the dump, may not restore properly.
336 Also, a dump using SET SESSION AUTHORIZATION will certainly
337 require superuser privileges to restore correctly, whereas ALTER
338 OWNER requires lesser privileges.
339
340 -Z 0..9
341
342 --compress=0..9
343 Specify the compression level to use in archive formats that
344 support compression. (Currently only the custom archive format
345 supports compression.)
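
              For example, to trade dump speed for a smaller file by using
              the maximum compression level with the custom format (level 9
              is only an illustration):

              $ pg_dump -Fc -Z 9 mydb > db.dump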
346
347 The following command-line options control the database connection
348 parameters.
349
350 -h host
351
352 --host=host
353 Specifies the host name of the machine on which the server is
354 running. If the value begins with a slash, it is used as the
355 directory for the Unix domain socket. The default is taken from
356 the PGHOST environment variable, if set, else a Unix domain
357 socket connection is attempted.
358
359 -p port
360
361 --port=port
362 Specifies the TCP port or local Unix domain socket file exten‐
363 sion on which the server is listening for connections. Defaults
364 to the PGPORT environment variable, if set, or a compiled-in
365 default.
366
367 -U username
368 Connect as the given user.
369
370 -W Force a password prompt. This should happen automatically if the
371 server requires password authentication.
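
       Putting the connection options together, a sketch of dumping a data‐
       base on a remote server (the host name, port, user name, and database
       name are placeholders):

       $ pg_dump -h db.example.com -p 5433 -U admin -W mydb > db.sql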
372
ENVIRONMENT
374 PGDATABASE
375
376 PGHOST
377
378 PGPORT
379
380 PGUSER Default connection parameters.
381
382 This utility, like most other PostgreSQL utilities, also uses the envi‐
383 ronment variables supported by libpq (see the libpq documentation).
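
       The same connection can be described through the environment instead
       of command-line switches; for example (the values are placeholders):

       $ PGHOST=db.example.com PGPORT=5433 PGUSER=admin pg_dump mydb > db.sql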
384
DIAGNOSTICS
386 pg_dump internally executes SELECT statements. If you have problems
387 running pg_dump, make sure you are able to select information from the
388 database using, for example, psql(1). Also, any default connection set‐
389 tings and environment variables used by the libpq front-end library
390 will apply.
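
       For instance, a quick way to verify that the same connection settings
       work at all is to run a trivial query through psql before invoking
       pg_dump (the settings shown are placeholders):

       $ psql -h db.example.com -U admin -d mydb -c 'SELECT 1'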
391
NOTES
393 If your database cluster has any local additions to the template1 data‐
394 base, be careful to restore the output of pg_dump into a truly empty
395 database; otherwise you are likely to get errors due to duplicate defi‐
396 nitions of the added objects. To make an empty database without any
397 local additions, copy from template0 not template1, for example:
398
399 CREATE DATABASE foo WITH TEMPLATE template0;
400
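       Equivalently, the empty database can be created from the command line
       with createdb, which accepts a template via -T (the database name foo
       matches the example above):

       $ createdb -T template0 foo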
401
402 pg_dump has a few limitations:
403
404 · When a data-only dump is chosen and the option --disable-triggers is
405 used, pg_dump emits commands to disable triggers on user tables
406 before inserting the data and commands to re-enable them after the
407 data has been inserted. If the restore is stopped in the middle, the
408 system catalogs may be left in the wrong state.
409
410 Members of tar archives are limited to a size less than 8 GB. (This is
411 an inherent limitation of the tar file format.) Therefore this format
412 cannot be used if the textual representation of any one table exceeds
413 that size. The total size of a tar archive and any of the other output
414 formats is not limited, except possibly by the operating system.
415
416 The dump file produced by pg_dump does not contain the statistics used
417 by the optimizer to make query planning decisions. Therefore, it is
418 wise to run ANALYZE after restoring from a dump file to ensure good
419 performance.
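
       For example, after restoring into newdb (an illustrative name), sta‐
       tistics can be regenerated with either of the following (note that
       vacuumdb --analyze also performs a VACUUM):

       $ psql -d newdb -c 'ANALYZE'
       $ vacuumdb --analyze newdb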
420
421 Because pg_dump is used to transfer data to newer versions of Post‐
422 greSQL, the output of pg_dump can be loaded into newer PostgreSQL data‐
423 bases. It also can read older PostgreSQL databases. However, it usually
424 cannot read newer PostgreSQL databases or produce dump output that can
425 be loaded into older database versions. To do this, manual editing of
426 the dump file might be required.
427
EXAMPLES
429 To dump a database called mydb into a SQL-script file:
430
431 $ pg_dump mydb > db.sql
432
433
434 To reload such a script into a (freshly created) database named newdb:
435
436 $ psql -d newdb -f db.sql
437
438
439 To dump a database into a custom-format archive file:
440
441 $ pg_dump -Fc mydb > db.dump
442
443
444 To reload an archive file into a (freshly created) database named
445 newdb:
446
447 $ pg_restore -d newdb db.dump
448
449
450 To dump a single table named mytab:
451
452 $ pg_dump -t mytab mydb > db.sql
453
454
455 To dump all tables whose names start with emp in the detroit schema,
456 except for the table named employee_log:
457
458 $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql
459
460
461 To dump all schemas whose names start with east or west and end in gsm,
462 excluding any schemas whose names contain the word test:
463
464 $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
465
466
467 The same, using regular expression notation to consolidate the
468 switches:
469
470 $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
471
472
473 To dump all database objects except for tables whose names begin with
474 ts_:
475
476 $ pg_dump -T 'ts_*' mydb > db.sql
477
478
479 To specify an upper-case or mixed-case name in -t and related switches,
480 you need to double-quote the name; else it will be folded to lower case
481 (see Patterns [psql(1)]). But double quotes are special to the shell,
482 so in turn they must be quoted. Thus, to dump a single table with a
483 mixed-case name, you need something like
484
485 $ pg_dump -t '"MixedCaseName"' mydb > mytab.sql
486
487
HISTORY
489 The pg_dump utility first appeared in Postgres95 release 0.02. The non-
490 plain-text output formats were introduced in PostgreSQL release 7.1.
491
SEE ALSO
493 pg_dumpall(1), pg_restore(1), psql(1)
494
495
496
497 Application 2008-06-08 PG_DUMP(1)