PG_DUMPALL(1)            PostgreSQL 16.1 Documentation            PG_DUMPALL(1)

NAME
       pg_dumpall - extract a PostgreSQL database cluster into a script file

SYNOPSIS
       pg_dumpall [connection-option...] [option...]

DESCRIPTION
       pg_dumpall is a utility for writing out (“dumping”) all PostgreSQL
       databases of a cluster into one script file. The script file contains
       SQL commands that can be used as input to psql(1) to restore the
       databases. It does this by calling pg_dump(1) for each database in the
       cluster. pg_dumpall also dumps global objects that are common to all
       databases, namely database roles, tablespaces, and privilege grants
       for configuration parameters. (pg_dump does not save these objects.)

       Since pg_dumpall reads tables from all databases, you will most likely
       have to connect as a database superuser in order to produce a complete
       dump. You will also need superuser privileges to execute the saved
       script in order to be allowed to add roles and create databases.

       The SQL script will be written to the standard output. Use the
       -f/--file option or shell operators to redirect it into a file.

       pg_dumpall needs to connect several times to the PostgreSQL server
       (once per database). If you use password authentication it will ask
       for a password each time. It is convenient to have a ~/.pgpass file in
       such cases. See Section 34.16 for more information.
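
       For example, a ~/.pgpass entry covering every database in a cluster
       might look like the following (the host name, user name, and password
       are placeholders):

           db.example.com:5432:*:postgres:secret-password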

OPTIONS
       The following command-line options control the content and format of
       the output.

       -a
       --data-only
           Dump only the data, not the schema (data definitions).

       -c
       --clean
           Emit SQL commands to DROP all the dumped databases, roles, and
           tablespaces before recreating them. This option is useful when the
           restore is to overwrite an existing cluster. If any of the objects
           do not exist in the destination cluster, ignorable error messages
           will be reported during restore, unless --if-exists is also
           specified.

       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another
           way to get the same result is to set the PGCLIENTENCODING
           environment variable to the desired dump encoding.)
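
           For example, to have the dump written in UTF8 regardless of the
           encodings of the individual databases (cluster.sql being an
           illustrative output file name):

               $ pg_dumpall -E UTF8 -f cluster.sql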

       -f filename
       --file=filename
           Send output to the specified file. If this is omitted, the
           standard output is used.

       -g
       --globals-only
           Dump only global objects (roles and tablespaces), no databases.
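
           For example, to save just the role and tablespace definitions so
           they can be restored alongside individual pg_dump backups, one
           might run (the output file name is illustrative):

               $ pg_dumpall -g -f globals.sql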

       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dumpall issues ALTER OWNER or
           SET SESSION AUTHORIZATION statements to set ownership of created
           schema elements. These statements will fail when the script is run
           unless it is started by a superuser (or the same user that owns
           all of the objects in the script). To make a script that can be
           restored by any user, but will give that user ownership of all the
           objects, specify -O.

       -r
       --roles-only
           Dump only roles, no databases or tablespaces.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually,
           it's better to leave this out, and instead start the resulting
           script as superuser.)

       -t
       --tablespaces-only
           Dump only tablespaces, no databases or roles.

       -v
       --verbose
           Specifies verbose mode. This will cause pg_dumpall to output
           start/stop times to the dump file, and progress messages to
           standard error. Repeating the option causes additional debug-level
           messages to appear on standard error. The option is also passed
           down to pg_dump.

       -V
       --version
           Print the pg_dumpall version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of
           the option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be loaded
           into non-PostgreSQL databases.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function
           bodies, and forces them to be quoted using SQL standard string
           syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dumpall to include commands to temporarily disable
           triggers on the target tables while the data is restored. Use this
           if you have referential integrity checks or other triggers on the
           tables that you do not want to invoke during data restore.

           Presently, the commands emitted for --disable-triggers must be
           done as superuser. So, you should also specify a superuser name
           with -S, or preferably be careful to start the resulting script as
           a superuser.
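
           For example, a data-only dump that disables triggers during
           restore and names postgres as the superuser to use for doing so
           might be produced with (output file name illustrative):

               $ pg_dumpall --data-only --disable-triggers -S postgres -f data.sql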

       --exclude-database=pattern
           Do not dump databases whose name matches pattern. Multiple
           patterns can be excluded by writing multiple --exclude-database
           switches. The pattern parameter is interpreted as a pattern
           according to the same rules used by psql's \d commands (see
           Patterns), so multiple databases can also be excluded by writing
           wildcard characters in the pattern. When using wildcards, be
           careful to quote the pattern if needed to prevent shell wildcard
           expansion.
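
           For example, to skip every database whose name starts with test,
           quoting the pattern so the shell does not expand the wildcard
           (output file name illustrative):

               $ pg_dumpall --exclude-database='test*' -f cluster.sql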

       --extra-float-digits=ndigits
           Use the specified value of extra_float_digits when dumping
           floating-point data, instead of the maximum available precision.
           Routine dumps made for backup purposes should not use this option.

       --if-exists
           Use DROP ... IF EXISTS commands to drop objects in --clean mode.
           This suppresses “does not exist” errors that might otherwise be
           reported. This option is not valid unless --clean is also
           specified.
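
           For example, a dump intended to overwrite an existing cluster
           without reporting errors for objects that are missing there might
           be taken with (output file name illustrative):

               $ pg_dumpall --clean --if-exists -f cluster.sql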

       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. Note that the restore
           might fail altogether if you have rearranged column order. The
           --column-inserts option is safer, though even slower.

       --load-via-partition-root
           When dumping data for a table partition, make the COPY or INSERT
           statements target the root of the partitioning hierarchy that
           contains it, rather than the partition itself. This causes the
           appropriate partition to be re-determined for each row when the
           data is loaded. This may be useful when restoring data on a server
           where rows do not always fall into the same partitions as they did
           on the original server. That could happen, for example, if the
           partitioning column is of type text and the two systems have
           different definitions of the collation used to sort the
           partitioning column.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the beginning
           of the dump. Instead, fail if unable to lock a table within the
           specified timeout. The timeout may be specified in any of the
           formats accepted by SET statement_timeout.
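
           For example, to give up rather than wait more than two minutes for
           a table lock, the timeout could be given as an integer number of
           milliseconds, a format that SET statement_timeout accepts (output
           file name illustrative):

               $ pg_dumpall --lock-wait-timeout=120000 -f cluster.sql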

       --no-comments
           Do not dump comments.

       --no-publications
           Do not dump publications.

       --no-role-passwords
           Do not dump passwords for roles. When restored, roles will have a
           null password, and password authentication will always fail until
           the password is set. Since password values aren't needed when this
           option is specified, the role information is read from the catalog
           view pg_roles instead of pg_authid. Therefore, this option also
           helps if access to pg_authid is restricted by some security
           policy.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dumpall will wait for all files to be written
           safely to disk. This option causes pg_dumpall to return without
           waiting, which is faster, but means that a subsequent operating
           system crash can leave the dump corrupt. Generally, this option is
           useful for testing but should not be used when dumping data from a
           production installation.

       --no-table-access-method
           Do not output commands to select table access methods. With this
           option, all objects will be created with whichever table access
           method is the default during restore.

       --no-tablespaces
           Do not output commands to create tablespaces nor select
           tablespaces for objects. With this option, all objects will be
           created in whichever tablespace is the default during restore.

       --no-toast-compression
           Do not output commands to set TOAST compression methods. With this
           option, all columns will be restored with the default compression
           setting.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables. This option has no
           effect on whether or not the table definitions (schema) are
           dumped; it only suppresses dumping the table data.

       --on-conflict-do-nothing
           Add ON CONFLICT DO NOTHING to INSERT commands. This option is not
           valid unless --inserts or --column-inserts is also specified.

       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version is
           different from pg_dumpall's, or when the output is intended to be
           loaded into a server of a different major version. By default,
           pg_dumpall quotes only identifiers that are reserved words in its
           own major version. This sometimes results in compatibility issues
           when dealing with servers of other versions that may have slightly
           different sets of reserved words. Using --quote-all-identifiers
           prevents such issues, at the price of a harder-to-read dump
           script.

       --rows-per-insert=nrows
           Dump data as INSERT commands (rather than COPY). Controls the
           maximum number of rows per INSERT command. The value specified
           must be a number greater than zero. Any error during restoring
           will cause only rows that are part of the problematic INSERT to be
           lost, rather than the entire table contents.
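
           For example, to emit INSERT statements carrying at most 1000 rows
           each instead of COPY (output file name illustrative):

               $ pg_dumpall --rows-per-insert=1000 -f cluster.sql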

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes the
           dump more standards compatible, but depending on the history of
           the objects in the dump, might not restore properly.

       -?
       --help
           Show help about pg_dumpall command line arguments, and exit.

       The following command-line options control the database connection
       parameters.

       -d connstr
       --dbname=connstr
           Specifies parameters used to connect to the server, as a
           connection string; these will override any conflicting command
           line options.

           The option is called --dbname for consistency with other client
           applications, but because pg_dumpall needs to connect to many
           databases, the database name in the connection string will be
           ignored. Use the -l option to specify the name of the database
           used for the initial connection, which will dump global objects
           and discover what other databases should be dumped.
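
           For example, with a purely illustrative host name and output file
           name, one might connect using a keyword/value connection string
           and pick the initial database with -l:

               $ pg_dumpall -d "host=db.example.com user=postgres sslmode=require" \
                     -l postgres -f cluster.sql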

       -h host
       --host=host
           Specifies the host name of the machine on which the database
           server is running. If the value begins with a slash, it is used
           as the directory for the Unix domain socket. The default is taken
           from the PGHOST environment variable, if set, else a Unix domain
           socket connection is attempted.

       -l dbname
       --database=dbname
           Specifies the name of the database to connect to for dumping
           global objects and discovering what other databases should be
           dumped. If not specified, the postgres database will be used, and
           if that does not exist, template1 will be used.

       -p port
       --port=port
           Specifies the TCP port or local Unix domain socket file extension
           on which the server is listening for connections. Defaults to the
           PGPORT environment variable, if set, or a compiled-in default.

       -U username
       --username=username
           User name to connect as.

       -w
       --no-password
           Never issue a password prompt. If the server requires password
           authentication and a password is not available by other means such
           as a .pgpass file, the connection attempt will fail. This option
           can be useful in batch jobs and scripts where no user is present
           to enter a password.

       -W
       --password
           Force pg_dumpall to prompt for a password before connecting to a
           database.

           This option is never essential, since pg_dumpall will
           automatically prompt for a password if the server demands password
           authentication. However, pg_dumpall will waste a connection
           attempt finding out that the server wants a password. In some
           cases it is worth typing -W to avoid the extra connection attempt.

           Note that the password prompt will occur again for each database
           to be dumped. Usually, it's better to set up a ~/.pgpass file than
           to rely on manual password entry.

       --role=rolename
           Specifies a role name to be used to create the dump. This option
           causes pg_dumpall to issue a SET ROLE rolename command after
           connecting to the database. It is useful when the authenticated
           user (specified by -U) lacks privileges needed by pg_dumpall, but
           can switch to a role with the required rights. Some installations
           have a policy against logging in directly as a superuser, and use
           of this option allows dumps to be made without violating the
           policy.
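
           For example, assuming an installation where the (illustrative)
           login role alice is allowed to switch to a privileged role named
           dump_admin, the dump might be taken with:

               $ pg_dumpall -U alice --role=dump_admin -f cluster.sql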

ENVIRONMENT
       PGHOST
       PGOPTIONS
       PGPORT
       PGUSER
           Default connection parameters
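
           For example, these can be set in the environment instead of being
           passed as options on every invocation (host name illustrative):

               $ export PGHOST=db.example.com PGUSER=postgres
               $ pg_dumpall -f cluster.sql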

       PG_COLOR
           Specifies whether to use color in diagnostic messages. Possible
           values are always, auto and never.

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see Section 34.15).

NOTES
       Since pg_dumpall calls pg_dump internally, some diagnostic messages
       will refer to pg_dump.

       The --clean option can be useful even when your intention is to
       restore the dump script into a fresh cluster. Use of --clean
       authorizes the script to drop and re-create the built-in postgres and
       template1 databases, ensuring that those databases will retain the
       same properties (for instance, locale and encoding) that they had in
       the source cluster. Without the option, those databases will retain
       their existing database-level properties, as well as any pre-existing
       contents.

       Once restored, it is wise to run ANALYZE on each database so the
       optimizer has useful statistics. You can also run vacuumdb -a -z to
       analyze all databases.

       The dump script should not be expected to run completely without
       errors. In particular, because the script will issue CREATE ROLE for
       every role existing in the source cluster, it is certain to get a
       “role already exists” error for the bootstrap superuser, unless the
       destination cluster was initialized with a different bootstrap
       superuser name. This error is harmless and should be ignored. Use of
       the --clean option is likely to produce additional harmless error
       messages about non-existent objects, although you can minimize those
       by adding --if-exists.

       pg_dumpall requires all needed tablespace directories to exist before
       the restore; otherwise, database creation will fail for databases in
       non-default locations.

EXAMPLES
       To dump all databases:

           $ pg_dumpall > db.out

       To restore database(s) from this file, you can use:

           $ psql -f db.out postgres

       It is not important to which database you connect here since the
       script file created by pg_dumpall will contain the appropriate
       commands to create and connect to the saved databases. An exception is
       that if you specified --clean, you must connect to the postgres
       database initially; the script will attempt to drop other databases
       immediately, and that will fail for the database you are connected to.

SEE ALSO
       Check pg_dump(1) for details on possible error conditions.

PostgreSQL 16.1                        2023                      PG_DUMPALL(1)