BARMAN-CLOUD-BACKUP(1)           Version 3.9.0           BARMAN-CLOUD-BACKUP(1)

NAME
barman-cloud-backup - Back up a PostgreSQL instance and store it in the
Cloud

SYNOPSIS
barman-cloud-backup [OPTIONS] DESTINATION_URL SERVER_NAME

DESCRIPTION
This script can be used to perform a backup of a local PostgreSQL instance
and ship the resulting tarball(s) to the Cloud. Currently AWS S3, Azure Blob
Storage and Google Cloud Storage are supported.

It requires read access to PGDATA and tablespaces (normally run as the
postgres user). It can also be used as a hook script on a barman server, in
which case it requires read access to the directory where barman backups are
stored.

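For example, a minimal sketch of a plain (non-snapshot) backup run as the
postgres user, assuming an existing S3 bucket (the bucket path and server
name below are placeholders, not values taken from this page):

       barman-cloud-backup --cloud-provider=aws-s3 \
              s3://my-backup-bucket/postgres pg-main
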
If the arguments prefixed with --snapshot- are used, and snapshots are
supported for the selected cloud provider, then the backup will be performed
using snapshots of the disks specified using --snapshot-disk arguments. The
backup label and backup metadata will be uploaded to the cloud object store.

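As a sketch, a snapshot backup of two disks attached to a Google Cloud
instance might look like the following (project, zone, instance, disk and
bucket names are placeholders):

       barman-cloud-backup --cloud-provider=google-cloud-storage \
              --gcp-project=my-project --gcp-zone=europe-west1-b \
              --snapshot-instance=pg-vm-1 \
              --snapshot-disk=pgdata-disk --snapshot-disk=tblspc-disk \
              gs://my-backup-bucket/postgres pg-main
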
This script and Barman are administration tools for disaster recovery of
PostgreSQL servers written in Python and maintained by EnterpriseDB.

IMPORTANT: the Cloud upload process may fail if any file larger than the
configured --max-archive-size is present either in the data directory or in
any tablespaces. However, PostgreSQL creates files with a maximum size of
1GB, and files up to that size are always allowed, regardless of the
max-archive-size parameter.

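As an illustration, the archive size limit, compression and upload
concurrency can all be tuned on the command line (the values and names here
are arbitrary placeholders):

       barman-cloud-backup --snappy -J 4 -S 50GB \
              s3://my-backup-bucket/postgres pg-main
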
USAGE
usage: barman-cloud-backup [-V] [--help] [-v | -q] [-t]
                           [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}]
                           [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE]
                           [--profile AWS_PROFILE]
                           [--read-timeout READ_TIMEOUT]
                           [--azure-credential {azure-cli,managed-identity}]
                           [-z | -j | --snappy] [-h HOST] [-p PORT] [-U USER]
                           [--immediate-checkpoint] [-J JOBS]
                           [-S MAX_ARCHIVE_SIZE] [-d DBNAME] [-n BACKUP_NAME]
                           [--snapshot-instance SNAPSHOT_INSTANCE]
                           [--snapshot-disk NAME] [--snapshot-zone GCP_ZONE]
                           [--snapshot-gcp-project GCP_PROJECT]
                           [--gcp-project GCP_PROJECT]
                           [--kms-key-name KMS_KEY_NAME] [--gcp-zone GCP_ZONE]
                           [--tags [TAGS [TAGS ...]]] [-e {AES256,aws:kms}]
                           [--sse-kms-key-id SSE_KMS_KEY_ID]
                           [--aws-region AWS_REGION]
                           [--encryption-scope ENCRYPTION_SCOPE]
                           [--azure-subscription-id AZURE_SUBSCRIPTION_ID]
                           [--azure-resource-group AZURE_RESOURCE_GROUP]
                           destination_url server_name

This script can be used to perform a backup of a local PostgreSQL instance and
ship the resulting tarball(s) to the Cloud. Currently AWS S3, Azure Blob
Storage and Google Cloud Storage are supported.

positional arguments:
  destination_url       URL of the cloud destination, such as a bucket in AWS
                        S3. For example: `s3://bucket/path/to/folder`.
  server_name           the name of the server as configured in Barman.

optional arguments:
  -V, --version         show program's version number and exit
  --help                show this help message and exit
  -v, --verbose         increase output verbosity (e.g., -vv is more than -v)
  -q, --quiet           decrease output verbosity (e.g., -qq is less than -q)
  -t, --test            Test cloud connectivity and exit
  --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}
                        The cloud provider to use as a storage backend
  -z, --gzip            gzip-compress the backup while uploading to the cloud
  -j, --bzip2           bzip2-compress the backup while uploading to the cloud
  --snappy              snappy-compress the backup while uploading to the cloud
  -h HOST, --host HOST  host or Unix socket for PostgreSQL connection
                        (default: libpq settings)
  -p PORT, --port PORT  port for PostgreSQL connection (default: libpq
                        settings)
  -U USER, --user USER  user name for PostgreSQL connection (default: libpq
                        settings)
  --immediate-checkpoint
                        forces the initial checkpoint to be done as quickly as
                        possible
  -J JOBS, --jobs JOBS  number of subprocesses to upload data to cloud storage
                        (default: 2)
  -S MAX_ARCHIVE_SIZE, --max-archive-size MAX_ARCHIVE_SIZE
                        maximum size of an archive when uploading to cloud
                        storage (default: 100GB)
  -d DBNAME, --dbname DBNAME
                        Database name or conninfo string for Postgres
                        connection (default: postgres)
  -n BACKUP_NAME, --name BACKUP_NAME
                        a name which can be used to reference this backup in
                        commands such as barman-cloud-restore and
                        barman-cloud-backup-delete
  --snapshot-instance SNAPSHOT_INSTANCE
                        Instance where the disks to be backed up as snapshots
                        are attached
  --snapshot-disk NAME  Name of a disk from which snapshots should be taken
  --snapshot-zone GCP_ZONE
                        Zone of the disks from which snapshots should be taken
                        (deprecated: replaced by --gcp-zone)
  --tags [TAGS [TAGS ...]]
                        Tags to be added to all uploaded files in cloud
                        storage

Extra options for the aws-s3 cloud provider:
  --endpoint-url ENDPOINT_URL
                        Override default S3 endpoint URL with the given one
  -P AWS_PROFILE, --aws-profile AWS_PROFILE
                        profile name (e.g. INI section in AWS credentials
                        file)
  --profile AWS_PROFILE
                        profile name (deprecated: replaced by --aws-profile)
  --read-timeout READ_TIMEOUT
                        the time in seconds until a timeout is raised when
                        waiting to read from a connection (defaults to 60
                        seconds)
  -e {AES256,aws:kms}, --encryption {AES256,aws:kms}
                        The encryption algorithm used when storing the
                        uploaded data in S3. Allowed values:
                        'AES256'|'aws:kms'.
  --sse-kms-key-id SSE_KMS_KEY_ID
                        The AWS KMS key ID that should be used for encrypting
                        the uploaded data in S3. Can be specified using the
                        key ID on its own or using the full ARN for the key.
                        Only allowed if `-e/--encryption` is set to `aws:kms`.
  --aws-region AWS_REGION
                        The name of the AWS region containing the EC2 VM and
                        storage volumes defined by the --snapshot-instance and
                        --snapshot-disk arguments.

Extra options for the azure-blob-storage cloud provider:
  --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity}
                        Optionally specify the type of credential to use when
                        authenticating with Azure. If omitted then Azure Blob
                        Storage credentials will be obtained from the
                        environment and the default Azure authentication flow
                        will be used for authenticating with all other Azure
                        services. If no credentials can be found in the
                        environment then the default Azure authentication flow
                        will also be used for Azure Blob Storage.
  --encryption-scope ENCRYPTION_SCOPE
                        The name of an encryption scope defined in the Azure
                        Blob Storage service which is to be used to encrypt
                        the data in Azure
  --azure-subscription-id AZURE_SUBSCRIPTION_ID
                        The ID of the Azure subscription which owns the
                        instance and storage volumes defined by the
                        --snapshot-instance and --snapshot-disk arguments.
  --azure-resource-group AZURE_RESOURCE_GROUP
                        The name of the Azure resource group to which the
                        compute instance and disks defined by the
                        --snapshot-instance and --snapshot-disk arguments
                        belong.

Extra options for google-cloud-storage cloud provider:
  --snapshot-gcp-project GCP_PROJECT
                        GCP project under which disk snapshots should be
                        stored (deprecated: replaced by --gcp-project)
  --gcp-project GCP_PROJECT
                        GCP project under which disk snapshots should be
                        stored
  --kms-key-name KMS_KEY_NAME
                        The name of the GCP KMS key which should be used for
                        encrypting the uploaded data in GCS.
  --gcp-zone GCP_ZONE   Zone of the disks from which snapshots should be taken

REFERENCES
For Boto:

• https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html

For AWS:

• https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html

• https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

For Azure Blob Storage:

• https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters

• https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python

For libpq settings information:

• https://www.postgresql.org/docs/current/libpq-envars.html

For Google Cloud Storage:

• Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable

  Only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment
  variable is supported at the moment.

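For example, the service account key can be made visible to
barman-cloud-backup by exporting that variable before invoking it (the path
below is a placeholder):

       export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
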
DEPENDENCIES
If using --cloud-provider=aws-s3:

• boto3

If using --cloud-provider=azure-blob-storage:

• azure-storage-blob

• azure-identity (optional, if you wish to use DefaultAzureCredential)

If using --cloud-provider=google-cloud-storage:

• google-cloud-storage

If using --cloud-provider=google-cloud-storage with snapshot backups:

• grpcio

• google-cloud-compute

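As a sketch, in a pip-managed environment the per-provider dependencies
listed above could be installed along these lines (pinning and packaging
choices are left to the installation):

       # aws-s3
       pip install boto3
       # azure-blob-storage
       pip install azure-storage-blob azure-identity
       # google-cloud-storage (plus snapshot backups)
       pip install google-cloud-storage grpcio google-cloud-compute
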
EXIT CODES
0      Success

1      The backup was not successful

2      The connection to the cloud provider failed

3      There was an error in the command input

Other non-zero codes
       Failure

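For instance, a wrapper script can distinguish connectivity problems from
backup failures by inspecting the exit code (bucket and server names are
placeholders):

       barman-cloud-backup s3://my-backup-bucket/postgres pg-main
       rc=$?
       if [ "$rc" -eq 2 ]; then
              echo "connection to the cloud provider failed" >&2
       elif [ "$rc" -ne 0 ]; then
              echo "backup failed with exit code $rc" >&2
       fi
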
USAGE AS A HOOK SCRIPT
This script can be used in conjunction with post_backup_script or
post_backup_retry_script to relay barman backups to cloud storage as follows:

       post_backup_retry_script = 'barman-cloud-backup [OPTIONS] DESTINATION_URL ${BARMAN_SERVER}'

When running as a hook script, barman-cloud-backup will read the location of
the backup directory and the backup ID from the BACKUP_DIR and BACKUP_ID
environment variables set by barman.

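A hypothetical server section in barman.conf using this hook could therefore
look like the following (the server name, bucket and options are
placeholders):

       [pg-main]
       ; ... usual barman server configuration ...
       post_backup_retry_script = 'barman-cloud-backup --cloud-provider=aws-s3 s3://my-backup-bucket/barman ${BARMAN_SERVER}'
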
BUGS
Barman has been extensively tested, and is currently being used in several
production environments. However, we cannot exclude the presence of bugs.

Any bug can be reported via the GitHub issue tracker.

RESOURCES
• Homepage: <https://www.pgbarman.org/>

• Documentation: <https://docs.pgbarman.org/>

• Professional support: <https://www.enterprisedb.com/>

COPYING
Barman is the property of EnterpriseDB UK Limited and its code is distributed
under GNU General Public License v3.

© Copyright EnterpriseDB UK Limited 2011-2023

AUTHORS
EnterpriseDB <https://www.enterprisedb.com>.


Barman User manuals              October 3, 2023           BARMAN-CLOUD-BACKUP(1)