PG_DUMPALL(1)            PostgreSQL 11.3 Documentation            PG_DUMPALL(1)

NAME
       pg_dumpall - extract a PostgreSQL database cluster into a script file

SYNOPSIS
       pg_dumpall [connection-option...] [option...]

DESCRIPTION
       pg_dumpall is a utility for writing out (“dumping”) all PostgreSQL
       databases of a cluster into one script file. The script file contains
       SQL commands that can be used as input to psql(1) to restore the
       databases. It does this by calling pg_dump(1) for each database in the
       cluster.  pg_dumpall also dumps global objects that are common to all
       databases, that is, database roles and tablespaces. (pg_dump does not
       save these objects.)

       Since pg_dumpall reads tables from all databases you will most likely
       have to connect as a database superuser in order to produce a complete
       dump. You will also need superuser privileges to execute the saved
       script in order to be allowed to add roles and create databases.

       The SQL script will be written to the standard output. Use the
       -f/--file option or shell operators to redirect it into a file.

       pg_dumpall needs to connect several times to the PostgreSQL server
       (once per database). If you use password authentication it will ask
       for a password each time. It is convenient to have a ~/.pgpass file in
       such cases. See Section 34.15 for more information.

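       For example, a single ~/.pgpass entry can supply the password for
       every per-database connection pg_dumpall makes. The format is
       hostname:port:database:username:password, where * matches anything;
       the host, user, and password shown here are placeholders:

           db.example.com:5432:*:postgres:secretpassword

       The file must be readable only by its owner (for example, chmod 0600
       ~/.pgpass), or libpq will ignore it.
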
OPTIONS
       The following command-line options control the content and format of
       the output.

       -a
       --data-only
           Dump only the data, not the schema (data definitions).

       -c
       --clean
           Include SQL commands to clean (drop) databases before recreating
           them. DROP commands for roles and tablespaces are added as well.

       -E encoding
       --encoding=encoding
           Create the dump in the specified character set encoding. By
           default, the dump is created in the database encoding. (Another
           way to get the same result is to set the PGCLIENTENCODING
           environment variable to the desired dump encoding.)

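           For instance, to force the dump into UTF8 regardless of the
           databases' own encodings, either of the following should work
           (the second relies on the PGCLIENTENCODING variable mentioned
           above; the output file name is arbitrary):

               $ pg_dumpall -E UTF8 -f db.out
               $ PGCLIENTENCODING=UTF8 pg_dumpall -f db.out
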
       -f filename
       --file=filename
           Send output to the specified file. If this is omitted, the
           standard output is used.

       -g
       --globals-only
           Dump only global objects (roles and tablespaces), no databases.

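           As an illustration, a backup routine might save the cluster-wide
           role and tablespace definitions separately from per-database
           dumps taken with pg_dump; the file name here is only an example:

               $ pg_dumpall --globals-only -f globals.sql
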
       -o
       --oids
           Dump object identifiers (OIDs) as part of the data for every
           table. Use this option if your application references the OID
           columns in some way (e.g., in a foreign key constraint).
           Otherwise, this option should not be used.

       -O
       --no-owner
           Do not output commands to set ownership of objects to match the
           original database. By default, pg_dumpall issues ALTER OWNER or
           SET SESSION AUTHORIZATION statements to set ownership of created
           schema elements. These statements will fail when the script is
           run unless it is started by a superuser (or the same user that
           owns all of the objects in the script). To make a script that can
           be restored by any user, but will give that user ownership of all
           the objects, specify -O.

       -r
       --roles-only
           Dump only roles, no databases or tablespaces.

       -s
       --schema-only
           Dump only the object definitions (schema), not data.

       -S username
       --superuser=username
           Specify the superuser user name to use when disabling triggers.
           This is relevant only if --disable-triggers is used. (Usually,
           it's better to leave this out, and instead start the resulting
           script as superuser.)

       -t
       --tablespaces-only
           Dump only tablespaces, no databases or roles.

       -v
       --verbose
           Specifies verbose mode. This will cause pg_dumpall to output
           start/stop times to the dump file, and progress messages to
           standard error. It will also enable verbose output in pg_dump.

       -V
       --version
           Print the pg_dumpall version and exit.

       -x
       --no-privileges
       --no-acl
           Prevent dumping of access privileges (grant/revoke commands).

       --binary-upgrade
           This option is for use by in-place upgrade utilities. Its use for
           other purposes is not recommended or supported. The behavior of
           the option may change in future releases without notice.

       --column-inserts
       --attribute-inserts
           Dump data as INSERT commands with explicit column names (INSERT
           INTO table (column, ...) VALUES ...). This will make restoration
           very slow; it is mainly useful for making dumps that can be
           loaded into non-PostgreSQL databases.

       --disable-dollar-quoting
           This option disables the use of dollar quoting for function
           bodies, and forces them to be quoted using SQL standard string
           syntax.

       --disable-triggers
           This option is relevant only when creating a data-only dump. It
           instructs pg_dumpall to include commands to temporarily disable
           triggers on the target tables while the data is reloaded. Use
           this if you have referential integrity checks or other triggers
           on the tables that you do not want to invoke during data reload.

           Presently, the commands emitted for --disable-triggers must be
           done as superuser. So, you should also specify a superuser name
           with -S, or preferably be careful to start the resulting script
           as a superuser.

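           A data-only dump that disables triggers during reload might
           therefore look like the following; the superuser name and output
           file name are only examples:

               $ pg_dumpall --data-only --disable-triggers -S postgres -f data.sql
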
       --if-exists
           Use conditional commands (i.e., add an IF EXISTS clause) to drop
           databases and other objects. This option is not valid unless
           --clean is also specified.

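           When combined with --clean, the emitted DROP commands become
           conditional, which avoids errors about objects that do not exist
           on the destination. A dump intended to overwrite an existing
           cluster could be produced like this (the file name is arbitrary):

               $ pg_dumpall --clean --if-exists -f cluster.sql
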
       --inserts
           Dump data as INSERT commands (rather than COPY). This will make
           restoration very slow; it is mainly useful for making dumps that
           can be loaded into non-PostgreSQL databases. Note that the
           restore might fail altogether if you have rearranged column
           order. The --column-inserts option is safer, though even slower.

       --load-via-partition-root
           When dumping data for a table partition, make the COPY or INSERT
           statements target the root of the partitioning hierarchy that
           contains it, rather than the partition itself. This causes the
           appropriate partition to be re-determined for each row when the
           data is loaded. This may be useful when reloading data on a
           server where rows do not always fall into the same partitions as
           they did on the original server. That could happen, for example,
           if the partitioning column is of type text and the two systems
           have different definitions of the collation used to sort the
           partitioning column.

       --lock-wait-timeout=timeout
           Do not wait forever to acquire shared table locks at the
           beginning of the dump. Instead, fail if unable to lock a table
           within the specified timeout. The timeout may be specified in any
           of the formats accepted by SET statement_timeout. Allowed values
           vary depending on the server version you are dumping from, but an
           integer number of milliseconds is accepted by all versions since
           7.3. This option is ignored when dumping from a pre-7.3 server.

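           For example, to give up if any table lock cannot be acquired
           within two minutes, expressed in milliseconds (the form accepted
           by all supported server versions):

               $ pg_dumpall --lock-wait-timeout=120000 > db.out

           Unit-suffixed values such as 2min are also possible on servers
           whose SET statement_timeout accepts them.
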
       --no-comments
           Do not dump comments.

       --no-publications
           Do not dump publications.

       --no-role-passwords
           Do not dump passwords for roles. When restored, roles will have a
           null password, and password authentication will always fail until
           the password is set. Since password values aren't needed when
           this option is specified, the role information is read from the
           catalog view pg_roles instead of pg_authid. Therefore, this
           option also helps if access to pg_authid is restricted by some
           security policy.

       --no-security-labels
           Do not dump security labels.

       --no-subscriptions
           Do not dump subscriptions.

       --no-sync
           By default, pg_dumpall will wait for all files to be written
           safely to disk. This option causes pg_dumpall to return without
           waiting, which is faster, but means that a subsequent operating
           system crash can leave the dump corrupt. Generally, this option
           is useful for testing but should not be used when dumping data
           from a production installation.

       --no-tablespaces
           Do not output commands to create tablespaces nor select
           tablespaces for objects. With this option, all objects will be
           created in whichever tablespace is the default during restore.

       --no-unlogged-table-data
           Do not dump the contents of unlogged tables. This option has no
           effect on whether or not the table definitions (schema) are
           dumped; it only suppresses dumping the table data.

       --quote-all-identifiers
           Force quoting of all identifiers. This option is recommended when
           dumping a database from a server whose PostgreSQL major version
           is different from pg_dumpall's, or when the output is intended to
           be loaded into a server of a different major version. By default,
           pg_dumpall quotes only identifiers that are reserved words in its
           own major version. This sometimes results in compatibility issues
           when dealing with servers of other versions that may have
           slightly different sets of reserved words. Using
           --quote-all-identifiers prevents such issues, at the price of a
           harder-to-read dump script.

       --use-set-session-authorization
           Output SQL-standard SET SESSION AUTHORIZATION commands instead of
           ALTER OWNER commands to determine object ownership. This makes
           the dump more standards-compatible, but depending on the history
           of the objects in the dump, might not restore properly.

       -?
       --help
           Show help about pg_dumpall command line arguments, and exit.

       The following command-line options control the database connection
       parameters.

       -d connstr
       --dbname=connstr
           Specifies parameters used to connect to the server, as a
           connection string. See Section 34.1.1 for more information.

           The option is called --dbname for consistency with other client
           applications, but because pg_dumpall needs to connect to many
           databases, the database name in the connection string will be
           ignored. Use the -l option to specify the name of the database
           used for the initial connection, which will dump global objects
           and discover what other databases should be dumped.

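           As an illustration, connection parameters can be bundled into a
           single connection string; the host name and user shown here are
           placeholders:

               $ pg_dumpall -d "host=db.example.com port=5433 user=postgres" -f db.out
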
       -h host
       --host=host
           Specifies the host name of the machine on which the database
           server is running. If the value begins with a slash, it is used
           as the directory for the Unix domain socket. The default is taken
           from the PGHOST environment variable, if set, else a Unix domain
           socket connection is attempted.

       -l dbname
       --database=dbname
           Specifies the name of the database to connect to for dumping
           global objects and discovering what other databases should be
           dumped. If not specified, the postgres database will be used, and
           if that does not exist, template1 will be used.

       -p port
       --port=port
           Specifies the TCP port or local Unix domain socket file extension
           on which the server is listening for connections. Defaults to the
           PGPORT environment variable, if set, or a compiled-in default.

       -U username
       --username=username
           User name to connect as.

       -w
       --no-password
           Never issue a password prompt. If the server requires password
           authentication and a password is not available by other means
           such as a .pgpass file, the connection attempt will fail. This
           option can be useful in batch jobs and scripts where no user is
           present to enter a password.

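           In an unattended job, -w is typically combined with a ~/.pgpass
           entry (or another non-interactive authentication method), for
           example in a crontab line like this one; the schedule and output
           path are placeholders:

               0 2 * * *  pg_dumpall -w -f /var/backups/pgsql/cluster.sql
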
       -W
       --password
           Force pg_dumpall to prompt for a password before connecting to a
           database.

           This option is never essential, since pg_dumpall will
           automatically prompt for a password if the server demands
           password authentication. However, pg_dumpall will waste a
           connection attempt finding out that the server wants a password.
           In some cases it is worth typing -W to avoid the extra connection
           attempt.

           Note that the password prompt will occur again for each database
           to be dumped. Usually, it's better to set up a ~/.pgpass file
           than to rely on manual password entry.

       --role=rolename
           Specifies a role name to be used to create the dump. This option
           causes pg_dumpall to issue a SET ROLE rolename command after
           connecting to the database. It is useful when the authenticated
           user (specified by -U) lacks privileges needed by pg_dumpall, but
           can switch to a role with the required rights. Some installations
           have a policy against logging in directly as a superuser, and use
           of this option allows dumps to be made without violating the
           policy.

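           For instance, an administrator who logs in as an unprivileged
           user but is allowed to switch to a more privileged role might run
           something like the following; both role names and the file name
           are placeholders:

               $ pg_dumpall -U alice --role=dump_admin -f cluster.sql
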
ENVIRONMENT
       PGHOST
       PGOPTIONS
       PGPORT
       PGUSER
           Default connection parameters

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see Section 34.14).

NOTES
       Since pg_dumpall calls pg_dump internally, some diagnostic messages
       will refer to pg_dump.

       The --clean option can be useful even when your intention is to
       restore the dump script into a fresh cluster. Use of --clean
       authorizes the script to drop and re-create the built-in postgres and
       template1 databases, ensuring that those databases will retain the
       same properties (for instance, locale and encoding) that they had in
       the source cluster. Without the option, those databases will retain
       their existing database-level properties, as well as any pre-existing
       contents.

       Once restored, it is wise to run ANALYZE on each database so the
       optimizer has useful statistics. You can also run vacuumdb -a -z to
       analyze all databases.

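       For example, after loading the script, statistics can be refreshed
       cluster-wide with either of these commands; the --analyze-only form
       runs ANALYZE without performing a full VACUUM:

           $ vacuumdb -a -z
           $ vacuumdb -a --analyze-only
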
       The dump script should not be expected to run completely without
       errors. In particular, because the script will issue CREATE ROLE for
       every role existing in the source cluster, it is certain to get a
       “role already exists” error for the bootstrap superuser, unless the
       destination cluster was initialized with a different bootstrap
       superuser name. This error is harmless and should be ignored. Use of
       the --clean option is likely to produce additional harmless error
       messages about non-existent objects, although you can minimize those
       by adding --if-exists.

       pg_dumpall requires all needed tablespace directories to exist before
       the restore; otherwise, database creation will fail for databases in
       non-default locations.

EXAMPLES
       To dump all databases:

           $ pg_dumpall > db.out

       To reload database(s) from this file, you can use:

           $ psql -f db.out postgres

       It is not important to which database you connect here since the
       script file created by pg_dumpall will contain the appropriate
       commands to create and connect to the saved databases. An exception
       is that if you specified --clean, you must connect to the postgres
       database initially; the script will attempt to drop other databases
       immediately, and that will fail for the database you are connected
       to.

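       As a sketch of that exception, a dump taken with --clean could be
       reloaded into an existing cluster like this, connecting to postgres
       for the initial session:

           $ pg_dumpall --clean > db.out
           $ psql -f db.out postgres
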
SEE ALSO
       Check pg_dump(1) for details on possible error conditions.



PostgreSQL 11.3                      2019                       PG_DUMPALL(1)