BARMAN-CLOUD-WAL-ARCHIVE(1)      Version 3.0.1     BARMAN-CLOUD-WAL-ARCHIVE(1)

NAME

       barman-cloud-wal-archive - Archive PostgreSQL WAL files in the Cloud
       using archive_command

SYNOPSIS

       barman-cloud-wal-archive [OPTIONS] DESTINATION_URL SERVER_NAME WAL_PATH

DESCRIPTION

       This script can be used in the archive_command of a PostgreSQL server
       to ship WAL files to the Cloud.  Currently AWS S3, Azure Blob Storage
       and Google Cloud Storage are supported.

       Note: If you are running Python 2 or older unsupported versions of
       Python 3, avoid the compression options --gzip and --bzip2, as
       barman-cloud-wal-restore is unable to restore gzip-compressed WALs on
       Python < 3.2 or bzip2-compressed WALs on Python < 3.3.

       This script and Barman are administration tools for disaster recovery
       of PostgreSQL servers, written in Python and maintained by
       EnterpriseDB.
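
       For example, a minimal archive_command entry in postgresql.conf might
       look like the following (the bucket name and server name are
       illustrative):

```ini
# postgresql.conf -- bucket and server names are illustrative
archive_mode = on
archive_command = 'barman-cloud-wal-archive s3://my-bucket/barman pg-server-1 %p'
```

       PostgreSQL replaces %p with the path of the WAL segment to be
       archived, which barman-cloud-wal-archive receives as WAL_PATH.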

POSITIONAL ARGUMENTS

       DESTINATION_URL
              URL of the cloud destination, such as a bucket in AWS S3.  For
              example: s3://BUCKET_NAME/path/to/folder (where BUCKET_NAME is
              the bucket you have created in AWS).

       SERVER_NAME
              the name of the server as configured in Barman.

       WAL_PATH
              the value of the `%p' keyword (according to `archive_command').

OPTIONS

       -h, --help
              show a help message and exit

       -V, --version
              show program's version number and exit

       -v, --verbose
              increase output verbosity (e.g., -vv is more than -v)

       -q, --quiet
              decrease output verbosity (e.g., -qq is less than -q)

       -t, --test
              test connectivity to the cloud destination and exit

       -z, --gzip
              gzip-compress the WAL while uploading to the cloud (should not
              be used with Python < 3.2)

       -j, --bzip2
              bzip2-compress the WAL while uploading to the cloud (should
              not be used with Python < 3.3)

       --snappy
              snappy-compress the WAL while uploading to the cloud (requires
              the optional python-snappy library and should not be used with
              Python < 3.3)

       --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}
              the cloud provider to which the WAL files should be uploaded

       --tags KEY1,VALUE1 KEY2,VALUE2 ...
              a space-separated list of comma-separated key-value pairs
              representing tags to be added to each WAL file archived to
              cloud storage.

       --history-tags KEY1,VALUE1 KEY2,VALUE2 ...
              a space-separated list of comma-separated key-value pairs
              representing tags to be added to each history file archived to
              cloud storage.  If this is provided alongside the --tags
              option then the value of --history-tags will be used in place
              of --tags for history files.  All other WAL files will
              continue to be tagged with the value of --tags.

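       Both options take the same KEY,VALUE form.  As a sketch, an
       archive_command that tags WAL files and history files differently
       might look like this (tag keys, tag values, bucket and server names
       are all illustrative):

```ini
# postgresql.conf -- all names and tag values below are illustrative
archive_command = 'barman-cloud-wal-archive --tags environment,prod type,wal --history-tags environment,prod type,history s3://my-bucket/barman pg-server-1 %p'
```
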
       -P, --profile
              profile name (e.g. INI section in the AWS credentials file)

       --endpoint-url
              override the default S3 URL construction mechanism by
              specifying an endpoint.

       -e, --encryption
              the encryption algorithm used when storing the uploaded data
              in S3.  Allowed values: `AES256'|`aws:kms'

       --encryption-scope
              the name of an encryption scope defined in the Azure Blob
              Storage service which is to be used to encrypt the data in
              Azure

       --credential {azure-cli,managed-identity}
              optionally specify the type of credential to use when
              authenticating with Azure Blob Storage.  If omitted then the
              credential will be obtained from the environment.  If no
              credentials can be found in the environment then the default
              Azure authentication flow will be used.

       --max-block-size SIZE
              the chunk size to be used when uploading an object to Azure
              Blob Storage via the concurrent chunk method (default: 4MB).

       --max-concurrency CONCURRENCY
              the maximum number of chunks to be uploaded concurrently to
              Azure Blob Storage (default: 1).  Whether the maximum
              concurrency is achieved depends on the values of
              --max-block-size (should be less than or equal to the WAL
              segment size after compression divided by --max-concurrency)
              and --max-single-put-size (must be less than the WAL segment
              size after compression).

       --max-single-put-size SIZE
              maximum size for which the Azure client will upload an object
              to Azure Blob Storage in a single request (default: 64MB).  If
              this is set lower than the WAL segment size after any applied
              compression then the concurrent chunk upload method for WAL
              archiving will be used.
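
       To illustrate how these three Azure options interact, here is a
       sketch that forces the concurrent chunk method by keeping
       --max-single-put-size below the WAL segment size (the account,
       container and server names are illustrative, as are the sizes):

```ini
# postgresql.conf -- illustrative values: a 16MB WAL segment is
# uploaded as four 4MB chunks, up to four of them concurrently.
archive_command = 'barman-cloud-wal-archive --cloud-provider azure-blob-storage --max-single-put-size 4MB --max-block-size 4MB --max-concurrency 4 https://myaccount.blob.core.windows.net/mycontainer pg-server-1 %p'
```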

REFERENCES

       For Boto:

       • https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html

       For AWS:

       • https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html

       • https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

       For Azure Blob Storage:

       • https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters

       • https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python

       For Google Cloud Storage:

       • Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable

       Only authentication with the GOOGLE_APPLICATION_CREDENTIALS
       environment variable is supported at the moment.
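
       A sketch of setting up credentials for Google Cloud Storage before
       starting PostgreSQL (the key file path is illustrative):

```shell
# Illustrative: export the service-account key location so that
# barman-cloud-wal-archive can authenticate with Google Cloud Storage.
export GOOGLE_APPLICATION_CREDENTIALS=/etc/barman/gcs-key.json
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```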

DEPENDENCIES

       If using --cloud-provider=aws-s3:

       • boto3

       If using --cloud-provider=azure-blob-storage:

       • azure-storage-blob

       • azure-identity (optional, if you wish to use DefaultAzureCredential)

       If using --cloud-provider=google-cloud-storage:

       • google-cloud-storage

EXIT STATUS

       0      Success

       1      The WAL archive operation was not successful

       2      The connection to the cloud provider failed

       3      There was an error in the command input

       Other non-zero codes
              Failure
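
       Since PostgreSQL retries a failing archive_command indefinitely, a
       wrapper script may want to treat connectivity failures differently
       from input errors.  A minimal sketch (the function is illustrative,
       not part of Barman):

```shell
# Illustrative helper: classify a barman-cloud-wal-archive exit code.
classify_archive_exit() {
    case "$1" in
        0) echo "success" ;;
        2) echo "retry" ;;    # connectivity problems are often transient
        *) echo "fatal" ;;    # archive error, bad input, or anything else
    esac
}

classify_archive_exit 2
```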

SEE ALSO

       This script can be used in conjunction with pre_archive_retry_script
       to relay WAL files to S3, as follows:

              pre_archive_retry_script = 'barman-cloud-wal-archive [*OPTIONS*] *DESTINATION_URL* ${BARMAN_SERVER}'
183

BUGS

       Barman has been extensively tested, and is currently being used in
       several production environments.  However, we cannot exclude the
       presence of bugs.

       Any bug can be reported via the GitHub issue tracker.

RESOURCES

       • Homepage: <https://www.pgbarman.org/>

       • Documentation: <https://docs.pgbarman.org/>

       • Professional support: <https://www.enterprisedb.com/>

COPYING

       Barman is the property of EnterpriseDB UK Limited and its code is
       distributed under GNU General Public License v3.

       © Copyright EnterpriseDB UK Limited 2011-2022

AUTHORS

       EnterpriseDB <https://www.enterprisedb.com>.



Barman User manuals              June 27, 2022     BARMAN-CLOUD-WAL-ARCHIVE(1)