BARMAN-CLOUD-BACKUP(1)           Version 3.0.1          BARMAN-CLOUD-BACKUP(1)


NAME

       barman-cloud-backup - Back up a PostgreSQL instance and store it in
       the Cloud

SYNOPSIS

       barman-cloud-backup [OPTIONS] DESTINATION_URL SERVER_NAME

DESCRIPTION

       This script can be used to perform a backup of a local PostgreSQL
       instance and ship the resulting tarball(s) to the Cloud.  Currently
       AWS S3, Azure Blob Storage and Google Cloud Storage are supported.

       It requires read access to PGDATA and tablespaces (it is normally
       run as the postgres user).  It can also be used as a hook script on
       a barman server, in which case it requires read access to the
       directory where barman backups are stored.

       This script and Barman are administration tools for disaster
       recovery of PostgreSQL servers written in Python and maintained by
       EnterpriseDB.

       IMPORTANT: the Cloud upload process may fail if any file with a size
       greater than the configured --max-archive-size is present either in
       the data directory or in any tablespaces.  However, PostgreSQL
       creates files with a maximum size of 1GB, and files of that size are
       always allowed, regardless of the --max-archive-size parameter.

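       For example, a minimal invocation against AWS S3 might look like the
       following sketch (the bucket, path and server name are
       placeholders):

              barman-cloud-backup --cloud-provider=aws-s3 \
                  s3://BUCKET_NAME/path/to/folder pg-main
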

POSITIONAL ARGUMENTS

       DESTINATION_URL
              URL of the cloud destination, such as a bucket in AWS S3.
              For example: s3://BUCKET_NAME/path/to/folder (where
              BUCKET_NAME is the bucket you have created in AWS).

       SERVER_NAME
              the name of the server as configured in Barman.


OPTIONS

       -h, --help
              show a help message and exit

       -V, --version
              show program's version number and exit

       -v, --verbose
              increase output verbosity (e.g., -vv is more than -v)

       -q, --quiet
              decrease output verbosity (e.g., -qq is less than -q)

       -t, --test
              test connectivity to the cloud destination and exit

       -z, --gzip
              gzip-compress the tar files when uploading to the cloud

       -j, --bzip2
              bzip2-compress the tar files when uploading to the cloud

       --snappy
              snappy-compress the tar files when uploading to the cloud
              (requires the optional python-snappy library)

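       For instance, a sketch of a gzip-compressed upload (the destination
       and server name are placeholders):

              barman-cloud-backup --gzip s3://BUCKET_NAME/path/to/folder pg-main
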
       -d, --dbname
              database name or conninfo string for Postgres connection
              (default: postgres)

       -h, --host
              host or Unix socket for PostgreSQL connection (default: libpq
              settings)

       -p, --port
              port for PostgreSQL connection (default: libpq settings)

       -U, --user
              user name for PostgreSQL connection (default: libpq settings)

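       As a sketch, the PostgreSQL connection options combine with the
       positional arguments as follows (the socket directory, user and
       names are placeholders):

              barman-cloud-backup --host /var/run/postgresql --user postgres \
                  s3://BUCKET_NAME/path/to/folder pg-main
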
       --immediate-checkpoint
              forces the initial checkpoint to be done as quickly as
              possible

       -J JOBS, --jobs JOBS
              number of subprocesses to upload data to cloud storage
              (default: 2)

       -S MAX_ARCHIVE_SIZE, --max-archive-size MAX_ARCHIVE_SIZE
              maximum size of an archive when uploading to cloud storage
              (default: 100GB)

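       For example, a sketch of an upload with four parallel jobs and 10GB
       archives (the values are illustrative, not recommendations):

              barman-cloud-backup --jobs 4 --max-archive-size 10GB \
                  s3://BUCKET_NAME/path/to/folder pg-main
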
       --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}
              the cloud provider to which the backup should be uploaded

       --tags KEY1,VALUE1 KEY2,VALUE2 ...
              a space-separated list of comma-separated key-value pairs
              representing tags to be added to each object created in cloud
              storage

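       For example, a sketch tagging every uploaded object (the keys and
       values are placeholders):

              barman-cloud-backup --tags environment,production team,dba \
                  s3://BUCKET_NAME/path/to/folder pg-main
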
       -P, --profile
              profile name (e.g. an INI section in the AWS credentials
              file)

       --endpoint-url
              override the default S3 URL construction mechanism by
              specifying an endpoint

       -e, --encryption
              the encryption algorithm used when storing the uploaded data
              in S3.  Allowed values: `AES256'|`aws:kms'

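       A sketch combining the S3-specific options (the profile name and
       endpoint are placeholders):

              barman-cloud-backup --profile barman \
                  --endpoint-url https://s3.example.com --encryption AES256 \
                  s3://BUCKET_NAME/path/to/folder pg-main
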
       --encryption-scope
              the name of an encryption scope defined in the Azure Blob
              Storage service which is to be used to encrypt the data in
              Azure

       --credential {azure-cli,managed-identity}
              optionally specify the type of credential to use when
              authenticating with Azure Blob Storage.  If omitted then the
              credential will be obtained from the environment.  If no
              credentials can be found in the environment then the default
              Azure authentication flow will be used.

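       For Azure Blob Storage, a sketch might look as follows (the account,
       container, encryption scope and server name are placeholders, and
       the azure:// URL form is assumed):

              barman-cloud-backup --cloud-provider=azure-blob-storage \
                  --credential azure-cli --encryption-scope my-scope \
                  azure://ACCOUNT_NAME.blob.core.windows.net/CONTAINER/path pg-main
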

REFERENCES

       For Boto:

       • https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html

       For AWS:

       • https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html

       • https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

       For Azure Blob Storage:

       • https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters

       • https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python

       For libpq settings information:

       • https://www.postgresql.org/docs/current/libpq-envars.html

       For Google Cloud Storage:

       • Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable

       Only authentication with the GOOGLE_APPLICATION_CREDENTIALS
       environment variable is supported at the moment.

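       As a sketch, assuming a service account key file and a gs:// bucket
       URL (all paths and names are placeholders):

              export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
              barman-cloud-backup --cloud-provider=google-cloud-storage \
                  gs://BUCKET_NAME/path/to/folder pg-main
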

DEPENDENCIES

       If using --cloud-provider=aws-s3:

       • boto3

       If using --cloud-provider=azure-blob-storage:

       • azure-storage-blob

       • azure-identity (optional, if you wish to use DefaultAzureCredential)

       If using --cloud-provider=google-cloud-storage:

       • google-cloud-storage
166

EXIT STATUS

       0      Success

       1      The backup was not successful

       2      The connection to the cloud provider failed

       3      There was an error in the command input

       Other non-zero codes
              Failure
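
       A calling script can branch on these codes; a minimal sketch (the
       destination and server name are placeholders):

              barman-cloud-backup s3://BUCKET_NAME/path/to/folder pg-main
              case $? in
                  0) echo "backup uploaded" ;;
                  2) echo "could not reach the cloud provider" >&2 ;;
                  *) echo "backup failed" >&2 ;;
              esac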
178

SEE ALSO

       This script can be used in conjunction with post_backup_script or
       post_backup_retry_script to relay barman backups to cloud storage as
       follows:

              post_backup_retry_script = 'barman-cloud-backup [*OPTIONS*] *DESTINATION_URL* ${BARMAN_SERVER}'

       When running as a hook script, barman-cloud-backup will read the
       location of the backup directory and the backup ID from the
       BACKUP_DIR and BACKUP_ID environment variables set by barman.
189

BUGS

       Barman has been extensively tested, and is currently being used in
       several production environments.  However, we cannot exclude the
       presence of bugs.

       Any bug can be reported via the GitHub issue tracker.
196

RESOURCES

       • Homepage: <https://www.pgbarman.org/>

       • Documentation: <https://docs.pgbarman.org/>

       • Professional support: <https://www.enterprisedb.com/>
203

COPYING

       Barman is the property of EnterpriseDB UK Limited and its code is
       distributed under GNU General Public License v3.

       © Copyright EnterpriseDB UK Limited 2011-2022
209

AUTHORS

       EnterpriseDB <https://www.enterprisedb.com>.

Barman User manuals              June 27, 2022          BARMAN-CLOUD-BACKUP(1)