DUPLICITY(1)                     User Manuals                     DUPLICITY(1)


NAME

6       duplicity - Encrypted incremental backup to local or remote storage.
7
8

SYNOPSIS

       For detailed descriptions of each command, see chapter ACTIONS.
11
12       duplicity [full|incremental] [options] source_directory target_url
13
14       duplicity verify [options] [--compare-data] [--file-to-restore
15       <relpath>] [--time time] source_url target_directory
16
17       duplicity collection-status [options] [--file-changed <relpath>]
18       [--show-changes-in-set <index>] target_url
19
20       duplicity list-current-files [options] [--time time] target_url
21
22       duplicity [restore] [options] [--file-to-restore <relpath>] [--time
23       time] source_url target_directory
24
25       duplicity remove-older-than <time> [options] [--force] target_url
26
27       duplicity remove-all-but-n-full <count> [options] [--force] target_url
28
29       duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
30       target_url
31
32       duplicity cleanup [options] [--force] target_url
33
34       duplicity replicate [options] [--time time] source_url target_url
35
36

DESCRIPTION

       Duplicity incrementally backs up files and folders into tar-format
       volumes encrypted with GnuPG and places them on a remote (or local)
       storage backend.  See chapter URL FORMAT for a list of all supported
       backends and how to address them.  Because duplicity uses librsync,
       incremental backups are space efficient and only record the parts of
       files that have changed since the last backup.  Currently duplicity
       supports deleted files, full Unix permissions, uid/gid, directories,
       symbolic links, fifos, etc., but not hard links.
46
47       If you are backing up the root directory /, remember to --exclude
48       /proc, or else duplicity will probably crash on the weird stuff in
49       there.
50
51

EXAMPLES

53       Here is an example of a backup, using sftp to back up /home/me to
54       some_dir on the other.host machine:
55
56              duplicity /home/me sftp://uid@other.host/some_dir
57
58       If the above is run repeatedly, the first will be a full backup, and
59       subsequent ones will be incremental. To force a full backup, use the
60       full action:
61
62              duplicity full /home/me sftp://uid@other.host/some_dir
63
       or enforce a full backup periodically via --full-if-older-than <time>,
       e.g. a full backup every month:
66
67              duplicity --full-if-older-than 1M /home/me
68              sftp://uid@other.host/some_dir
69
70       Now suppose we accidentally delete /home/me and want to restore it the
71       way it was at the time of last backup:
72
73              duplicity sftp://uid@other.host/some_dir /home/me
74
75       Duplicity enters restore mode because the URL comes before the local
76       directory.  If we wanted to restore just the file "Mail/article" in
77       /home/me as it was three days ago into /home/me/restored_file:
78
79              duplicity -t 3D --file-to-restore Mail/article
80              sftp://uid@other.host/some_dir /home/me/restored_file
81
82       The following command compares the latest backup with the current
83       files:
84
85              duplicity verify sftp://uid@other.host/some_dir /home/me
86
       Finally, duplicity recognizes several include/exclude options.  For
       instance, the following will back up the root directory, but exclude
       /mnt, /tmp, and /proc:
90
91              duplicity --exclude /mnt --exclude /tmp --exclude /proc /
92              file:///usr/local/backup
93
       Note that in this case the destination is the local directory
       /usr/local/backup.  The following will back up only the /home and
       /etc directories under root:
97
98              duplicity --include /home --include /etc --exclude '**' /
99              file:///usr/local/backup
100
101       Duplicity can also access a repository via ftp.  If a user name is
102       given, the environment variable FTP_PASSWORD is read to determine the
103       password:
104
105              FTP_PASSWORD=mypassword duplicity /local/dir
106              ftp://user@other.host/some_dir
107
108

ACTIONS

       Duplicity knows several action commands, which can be fine-tuned with
       options.  The actions for backup (full, incr) and restoration
       (restore) can also be left out, as duplicity detects which mode to
       switch to from the order of the target URL and the local folder: if
       the target URL comes before the local folder, a restore is performed;
       if the local folder comes before the target URL, that folder is
       backed up to the target URL.
       If a backup is requested and old signatures can be found, duplicity
       automatically performs an incremental backup.
119
       NOTE: The following descriptions cover some, but not all, of the
       options that can be used with each action command.  Consult the
       OPTIONS section for more detailed information.
123
124
125       full <folder> <url>
126              Perform a full backup. A new backup chain is started even if
127              signatures are available for an incremental backup.
128
129
       incr <folder> <url>
              Perform an incremental backup.  Duplicity will abort if no old
              signatures can be found.
133
134
135       verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
136       <url> <local_path>
137              Verify tests the integrity of the backup archives at the remote
138              location by downloading each file and checking both that it can
139              restore the archive and that the restored file matches the
140              signature of that file stored in the backup, i.e. compares the
141              archived file with its hash value from archival time. Verify
142              does not actually restore and will not overwrite any local
143              files. Duplicity will exit with a non-zero error level if any
144              files do not match the signature stored in the archive for that
145              file. On verbosity level 4 or higher, it will log a message for
146              each file that differs from the stored signature. Files must be
147              downloaded to the local machine in order to compare them.
148              Verify does not compare the backed-up version of the file to the
149              current local copy of the files unless the --compare-data option
150              is used (see below).
151              The --file-to-restore option restricts verify to that file or
152              folder.  The --time option allows one to select a backup to
153              verify.  The --compare-data option enables data comparison (see
154              below).
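
              For example, to verify the backup as of three days ago and
              compare the archive data against the current local files (the
              sftp URL and paths here are illustrative):

              duplicity verify --compare-data -t 3D
              sftp://uid@other.host/some_dir /home/me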
155
156
157       collection-status [--file-changed <relpath>] [--show-changes-in-set
158       <index>] <url>
159              Summarize the status of the backup repository by printing the
160              chains and sets found, and the number of volumes in each.
161              The --file-changed option summarizes the changes to the file (in
162              the most recent backup chain).  The --show-changes-in-set option
163              summarizes all the file changes in the index:th backup set
164              (where index 0 means the latest set, 1 means the next to latest,
165              etc.).
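
              For example, to print the chains and sets and summarize the
              file changes in the latest backup set (the URL is
              illustrative):

              duplicity collection-status --show-changes-in-set 0
              sftp://uid@other.host/some_dir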
166
167
168       list-current-files [--time <time>] <url>
169              Lists the files contained in the most current backup or backup
170              at time.  The information will be extracted from the signature
171              files, not the archive data itself. Thus the whole archive does
172              not have to be downloaded, but on the other hand if the archive
173              has been deleted or corrupted, this command will not detect it.
174
175
176       restore [--file-to-restore <relpath>] [--time <time>] <url>
177       <target_folder>
178              You can restore the full monty or selected folders/files from a
179              specific time.  Use the relative path as it is printed by list-
180              current-files.  Usually not needed as duplicity enters restore
181              mode when it detects that the URL comes before the local folder.
182
183
184       remove-older-than <time> [--force] <url>
185              Delete all backup sets older than the given time.  Old backup
186              sets will not be deleted if backup sets newer than time depend
187              on them.  See the TIME FORMATS section for more information.
188              Note, this action cannot be combined with backup or other
189              actions, such as cleanup.  Note also that --force will be needed
190              to delete the files instead of just listing them.
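
              For example, to actually delete (not merely list) all backup
              sets older than six months (the URL is illustrative):

              duplicity remove-older-than 6M --force
              sftp://uid@other.host/some_dir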
191
192
       remove-all-but-n-full <count> [--force] <url>
              Delete all backup sets that are older than the count:th last
              full backup (in other words, keep the last count full backups
              and associated incremental sets).  count must be larger than
              zero.  A value of 1 means that only the single most recent
              backup chain will be kept.  Note that --force will be needed to
              delete the files instead of just listing them.
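
              For example, to keep the three most recent full backups
              together with their incremental sets and delete everything
              older (the URL is illustrative):

              duplicity remove-all-but-n-full 3 --force
              sftp://uid@other.host/some_dir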
200
201
       remove-all-inc-of-but-n-full <count> [--force] <url>
              Delete incremental sets of all backup sets that are older than
204              the count:th last full backup (in other words, keep only old
205              full backups and not their increments).  count must be larger
206              than zero. A value of 1 means that only the single most recent
207              backup chain will be kept intact.  Note that --force will be
208              needed to delete the files instead of just listing them.
209
210
211       cleanup [--force] <url>
212              Delete the extraneous duplicity files on the given backend.
213              Non-duplicity files, or files in complete data sets will not be
214              deleted.  This should only be necessary after a duplicity
215              session fails or is aborted prematurely.  Note that --force will
216              be needed to delete the files instead of just listing them.
217
218
219       replicate [--time time] <source_url> <target_url>
220              Replicate backup sets from source to target backend. Files will
221              be (re)-encrypted and (re)-compressed depending on normal
222              backend options. Signatures and volumes will not get recomputed,
223              thus options like --volsize or --max-blocksize have no effect.
224              When --time time is given, only backup sets older than time will
225              be replicated.
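
              For example, to copy all backup sets older than one month from
              a local repository to a remote one (the URLs are illustrative):

              duplicity replicate --time 1M file:///usr/local/backup
              sftp://uid@other.host/replica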
226
227

OPTIONS

229       --allow-source-mismatch
230              Do not abort on attempts to use the same archive dir or remote
231              backend to back up different directories. duplicity will tell
232              you if you need this switch.
233
234
235       --archive-dir path
236              The archive directory.
237
238              NOTE: This option changed in 0.6.0.  The archive directory is
239              now necessary in order to manage persistence for current and
240              future enhancements.  As such, this option is now used only to
241              change the location of the archive directory.  The archive
242              directory should not be deleted, or duplicity will have to
243              recreate it from the remote repository (which may require
244              decrypting the backup contents).
245
246              When backing up or restoring, this option specifies that the
247              local archive directory is to be created in path.  If the
248              archive directory is not specified, the default will be to
249              create the archive directory in ~/.cache/duplicity/.
250
251              The archive directory can be shared between backups to multiple
252              targets, because a subdirectory of the archive dir is used for
253              individual backups (see --name ).
254
255              The combination of archive directory and backup name must be
256              unique in order to separate the data of different backups.
257
258              The interaction between the --archive-dir and the --name options
259              allows for four possible combinations for the location of the
260              archive dir:
261
262
263                     1.     neither specified (default)
264                             ~/.cache/duplicity/hash-of-url
265
266                     2.     --archive-dir=/arch, no --name
267                             /arch/hash-of-url
268
269                     3.     no --archive-dir, --name=foo
270                             ~/.cache/duplicity/foo
271
272                     4.     --archive-dir=/arch, --name=foo
273                             /arch/foo
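
              For example, combination 4 above could be requested as follows
              (the path, name and URL are illustrative):

              duplicity --archive-dir /arch --name foo /home/me
              sftp://uid@other.host/some_dir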
274
275
276       --asynchronous-upload
277              (EXPERIMENTAL) Perform file uploads asynchronously in the
278              background, with respect to volume creation. This means that
279              duplicity can upload a volume while, at the same time, preparing
280              the next volume for upload. The intended end-result is a faster
281              backup, because the local CPU and your bandwidth can be more
282              consistently utilized. Use of this option implies additional
283              need for disk space in the temporary storage location; rather
284              than needing to store only one volume at a time, enough storage
285              space is required to store two volumes.
286
287
288       --azure-blob-tier
289              Standard storage tier used for backup files (Hot|Cool|Archive).
290
291
       --azure-max-single-put-size
              Specify the largest supported upload size (in bytes) for which
              the Azure library makes only one put call.  If the content size
              is known and below this value, the Azure library will perform
              only one put request to upload one block.
298
299
       --azure-max-block-size
              Specify the block size (in bytes) used by the Azure library to
              upload a blob when it is split into multiple blocks.  The
              maximum block size the service supports is 104857600 (100MiB)
              and the default is 4194304 (4MiB).
305
306
       --azure-max-connections
              Specify the maximum number of connections used to transfer one
              blob to Azure when the blob size exceeds 64MB.  The default
              value is 2.
310
311
312       --backend-retry-delay number
313              Specifies the number of seconds that duplicity waits after an
314              error has occurred before attempting to repeat the operation.
315
316
317       --cf-backend backend
318              Allows the explicit selection of a cloudfiles backend. Defaults
319              to pyrax.  Alternatively you might choose cloudfiles.
320
321
322       --b2-hide-files
323              Causes Duplicity to hide files in B2 instead of deleting them.
324              Useful in combination with B2's lifecycle rules.
325
326
       --no-check-remote
              Turn off validation of the remote manifest.  Checking is the
              default.  No checking will allow you to back up without the
              private key, but will mean that the remote manifest may exist
              and be corrupted, leading to the possibility that the backup
              might not be recoverable.
333
334
335       --compare-data
336              Enable data comparison of regular files on action verify. This
337              conducts a verify as described above to verify the integrity of
338              the backup archives, but additionally compares restored files to
339              those in target_directory.  Duplicity will not replace any files
340              in target_directory. Duplicity will exit with a non-zero error
341              level if the files do not correctly verify or if any files from
342              the archive differ from those in target_directory. On verbosity
343              level 4 or higher, it will log a message for each file that
344              differs from its equivalent in target_directory.
345
346
347       --copy-links
348              Resolve symlinks during backup.  Enabling this will resolve &
349              back up the symlink's file/folder data instead of the symlink
350              itself, potentially increasing the size of the backup.
351
352
       --dry-run
              Calculate what would be done, but do not perform any backend
              actions.
356
357
358       --encrypt-key key-id
359              When backing up, encrypt to the given public key, instead of
360              using symmetric (traditional) encryption.  Can be specified
361              multiple times.  The key-id can be given in any of the formats
362              supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
363              ID" for details.
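
              For example, to encrypt a backup to two recipients (the key
              ids are illustrative):

              duplicity --encrypt-key 839E6A2856538CCF
              --encrypt-key alice@example.com /home/me
              sftp://uid@other.host/some_dir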
364
365
       --encrypt-secret-keyring filename
              This option can only be used with --encrypt-key, and changes
              the path to the secret keyring for the encrypt key to filename.
              This keyring is not used when creating a backup.  If not
              specified, the default secret keyring is used, which is usually
              located at ~/.gnupg/secring.gpg
372
373
374       --encrypt-sign-key key-id
375              Convenience parameter. Same as --encrypt-key key-id --sign-key
376              key-id.
377
378
379       --exclude shell_pattern
380              Exclude the file or files matched by shell_pattern.  If a
381              directory is matched, then files under that directory will also
382              be matched.  See the FILE SELECTION section for more
383              information.
384
385
386       --exclude-device-files
387              Exclude all device files.  This can be useful for
388              security/permissions reasons or if duplicity is not handling
389              device files correctly.
390
391
392       --exclude-filelist filename
393              Excludes the files listed in filename, with each line of the
394              filelist interpreted according to the same rules as --include
395              and --exclude.  See the FILE SELECTION section for more
396              information.
397
398
       --exclude-if-present filename
              Exclude directories if filename is present.  Allows the user to
              specify folders that they do not wish to back up by adding a
              specified file (e.g. ".nobackup") instead of maintaining a
              comprehensive exclude/include list.
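
              For example, to skip every folder that contains a file named
              ".nobackup" (the paths and URL are illustrative):

              duplicity --exclude-if-present .nobackup /home/me
              sftp://uid@other.host/some_dir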
404
405
406       --exclude-older-than time
407              Exclude any files whose modification date is earlier than the
408              specified time.  This can be used to produce a partial backup
409              that contains only recently changed files. See the TIME FORMATS
410              section for more information.
411
412
413       --exclude-other-filesystems
414              Exclude files on file systems (identified by device number)
415              other than the file system the root of the source directory is
416              on.
417
418
419       --exclude-regexp regexp
420              Exclude files matching the given regexp.  Unlike the --exclude
421              option, this option does not match files in a directory it
422              matches.  See the FILE SELECTION section for more information.
423
424
       --files-from filename
              Read a list of files to back up from filename rather than
              searching the entire backup source directory.  Operation is
              otherwise normal, just on the specified subset of the backup
              source directory.
430
431              Files must be specified one per line and relative to the backup
432              source directory. Any absolute paths will raise an error. All
433              characters per line are significant and treated as part of the
434              path, including leading and trailing whitespace. Lines are
435              separated by newlines or nulls, depending on whether the --null-
436              separator switch was given.
437
438              It is not necessary to include the parent directory of listed
439              files, their inclusion is implied. However, the content of any
440              explicitly listed directories is not implied. All required files
441              must be listed when this option is used.
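
              For example, given a file mylist.txt containing one relative
              path per line (the file name, paths and URL are illustrative):

              duplicity --files-from mylist.txt /home/me
              sftp://uid@other.host/some_dir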
442
443
444       --file-prefix prefix
445       --file-prefix-manifest prefix
446       --file-prefix-archive prefix
447       --file-prefix-signature prefix
              Adds a prefix either to all files, or only to manifest,
              archive, or signature files.

              The same set of prefixes must be passed in on backup and
              restore.

              If both global and type-specific prefixes are set, the global
              prefix will go before type-specific prefixes.
456
457              See also A NOTE ON FILENAME PREFIXES
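
              For example, to give archive volumes a prefix that a storage
              lifecycle rule could match (the prefix and URL are
              illustrative):

              duplicity --file-prefix-archive archive_ /home/me
              s3:///bucket_name/backup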
458
459       --file-to-restore path
460              This option may be given in restore mode, causing only path to
461              be restored instead of the entire contents of the backup
462              archive.  path should be given relative to the root of the
463              directory backed up.
464
465       --filter-globbing
466       --filter-ignorecase
467       --filter-literal
468       --filter-regexp
469       --filter-strictcase
470              Change the interpretation of patterns passed to the file
471              selection condition option arguments --exclude and --include
472              (and variations thereof, including file lists). These options
473              can appear multiple times to switch between shell globbing
474              (default), literal strings, and regular expressions, case
475              sensitive (default) or not. The specified interpretation applies
476              for all subsequent selection conditions up until the next
477              --filter option.
478
479              See the FILE SELECTION section for more information.
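
              For example, to exclude object files via shell globbing and a
              folder whose name contains globbing characters via a literal
              match (the patterns and URL are illustrative):

              duplicity --exclude '**/*.o' --filter-literal
              --exclude '/home/me/Photos [old]' /home/me
              sftp://uid@other.host/some_dir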
480
481       --full-if-older-than time
482              Perform a full backup if an incremental backup is requested, but
483              the latest full backup in the collection is older than the given
484              time.  See the TIME FORMATS section for more information.
485
486       --force
487              Proceed even if data loss might result.  Duplicity will let the
488              user know when this option is required.
489
       --ftp-passive
              Use passive (PASV) data connections.  The default is to use
              passive, but to fall back to regular if the passive connection
              fails or times out.
494
495       --ftp-regular
496              Use regular (PORT) data connections.
497
498       --gio  Use the GIO backend and interpret any URLs as GIO would.
499
       --hidden-encrypt-key key-id
              Same as --encrypt-key, but it hides the user's key id from the
              encrypted file.  It uses gpg's --hidden-recipient option to
              obfuscate the owner of the backup.  On restore, gpg will
              automatically try all available secret keys in order to decrypt
              the backup.  See gpg(1) for more details.
506
507       --ignore-errors
508              Try to ignore certain errors if they happen. This option is only
509              intended to allow the restoration of a backup in the face of
510              certain problems that would otherwise cause the backup to fail.
511              It is not ever recommended to use this option unless you have a
512              situation where you are trying to restore from backup and it is
513              failing because of an issue which you want duplicity to ignore.
514              Even then, depending on the issue, this option may not have an
515              effect.
516
517              Please note that while ignored errors will be logged, there will
518              be no summary at the end of the operation to tell you what was
519              ignored, if anything. If this is used for emergency restoration
520              of data, it is recommended that you run the backup in such a way
521              that you can revisit the backup log (look for lines containing
522              the string IGNORED_ERROR).
523
524              If you ever have to use this option for reasons that are not
525              understood or understood but not your own responsibility, please
526              contact duplicity maintainers. The need to use this option under
527              production circumstances would normally be considered a bug.
528
       --imap-full-address email_address
              The full email address of the user name when logging into an
              imap server.  If not supplied, just the user name part of the
              email address is used.
533
534       --imap-mailbox option
535              Allows you to specify a different mailbox.  The default is
536              "INBOX".  Other languages may require a different mailbox than
537              the default.
538
       --gpg-binary file_path
              Allows you to force duplicity to use file_path as the gpg
              command line binary.  Can be an absolute or relative file path
              or a file name.  Default value is 'gpg'.  The binary will be
              located via the PATH environment variable.
544
545       --gpg-options options
546              Allows you to pass options to gpg encryption.  The options list
547              should be of the form "--opt1 --opt2=parm" where the string is
548              quoted and the only spaces allowed are between options.
549
550       --include shell_pattern
551              Similar to --exclude but include matched files instead.  Unlike
552              --exclude, this option will also match parent directories of
553              matched files (although not necessarily their contents).  See
554              the FILE SELECTION section for more information.
555
556       --include-filelist filename
557              Like --exclude-filelist, but include the listed files instead.
558              See the FILE SELECTION section for more information.
559
560       --include-regexp regexp
561              Include files matching the regular expression regexp.  Only
562              files explicitly matched by regexp will be included by this
563              option.  See the FILE SELECTION section for more information.
564
565       --log-fd number
566              Write specially-formatted versions of output messages to the
567              specified file descriptor.  The format used is designed to be
568              easily consumable by other programs.
569
570       --log-file filename
571              Write specially-formatted versions of output messages to the
572              specified file.  The format used is designed to be easily
573              consumable by other programs.
574
       --max-blocksize number
              Determines the size of the blocks examined for changes during
              the diff process.  For files < 1MB the blocksize is a constant
              of 512.  For files over 1MB the size is given by:
579
580              file_blocksize = int((file_len / (2000 * 512)) * 512)
581              return min(file_blocksize, config.max_blocksize)
582
583              where config.max_blocksize defaults to 2048.  If you specify a
584              larger max_blocksize, your difftar files will be larger, but
585              your sigtar files will be smaller.  If you specify a smaller
586              max_blocksize, the reverse occurs.  The --max-blocksize option
587              should be in multiples of 512.
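
              For example, to allow blocks of up to 8192 bytes for large
              files, trading larger difftar files for smaller sigtar files
              (the value and URL are illustrative):

              duplicity --max-blocksize 8192 /home/me
              sftp://uid@other.host/some_dir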
588
589       --name symbolicname
590              Set the symbolic name of the backup being operated on. The
591              intent is to use a separate name for each logically distinct
592              backup. For example, someone may use "home_daily_s3" for the
593              daily backup of a home directory to Amazon S3. The structure of
594              the name is up to the user, it is only important that the names
595              be distinct. The symbolic name is currently only used to affect
596              the expansion of --archive-dir , but may be used for additional
597              features in the future. Users running more than one distinct
598              backup are encouraged to use this option.
599
600              If not specified, the default value is a hash of the backend
601              URL.
602
       --no-compression
              Do not use GZip to compress files on the remote system.

       --no-encryption
              Do not use GnuPG to encrypt files on the remote system.
608
609       --no-print-statistics
610              By default duplicity will print statistics about the current
611              session after a successful backup.  This switch disables that
612              behavior.
613
614       --no-files-changed
615              By default duplicity will collect file names and change action
616              in memory (add, del, chg) during backup.  This can be quite
617              expensive in memory use, especially with millions of small
618              files.  This flag turns off that collection.  This means that
619              the --file-changed option for collection-status will return
620              nothing.
621
622       --null-separator
623              Use nulls (\0) instead of newlines (\n) as line separators,
624              which may help when dealing with filenames containing newlines.
625              This affects the expected format of the files specified by the
626              --{include|exclude}-filelist switches and the --{files-from}
627              option, as well as the format of the directory statistics file.
628
       --numeric-owner
              On restore, always use the numeric uid/gid from the archive
              rather than the archived user/group names (the latter is the
              default behaviour).  Recommended for restoring from live CDs,
              which might have users with identical names but different
              uids/gids.
634
       --do-not-restore-ownership
              Ignores the uid/gid from the archive and keeps the current
              user's ownership.  Recommended for restoring data to a mounted
              filesystem which does not support Unix ownership, or when root
              privileges are not available.
640
641       --num-retries number
642              Number of retries to make on errors before giving up.
643
644       --old-filenames
645              Use the old filename format (incompatible with Windows/Samba)
646              rather than the new filename format.
647
648       --par2-options options
649              Verbatim options to pass to par2.
650
651       --par2-redundancy percent
652              Adjust the level of redundancy in percent for Par2 recovery
653              files (default 10%).
654
655       --par2-volumes number
656              Number of Par2 volumes to create (default 1).
657
       --progress
              When selected, duplicity will output the current upload
              progress and estimated upload time.  To estimate the changes,
              it will perform a first dry-run before a full or incremental
              backup, and then run the real operation, estimating the real
              upload progress.
663
       --progress-rate number
              Sets the update rate at which duplicity will output the upload
              progress messages (requires the --progress option).  Default is
              to print the status every 3 seconds.
668
       --rename <original path> <new path>
              Treats the path <original path> in the backup as if it were the
              path <new path>.  Can be passed multiple times.  An example:
672
673              duplicity restore --rename Documents/metal Music/metal
674              sftp://uid@other.host/some_dir /home/me
675
676       --rsync-options options
677              Allows you to pass options to the rsync backend.  The options
678              list should be of the form "opt1=parm1 opt2=parm2" where the
679              option string is quoted and the only spaces allowed are between
680              options. The option string will be passed verbatim to rsync,
681              after any internally generated option designating the remote
682              port to use. Here is a possibly useful example:
683
684              duplicity --rsync-options="--partial-dir=.rsync-partial"
685              /home/me rsync://uid@other.host/some_dir
686
687       --s3-endpoint-url url
688              Specifies the endpoint URL of the S3 storage.
689
690              NOTE: Due to API restrictions the legacy backend boto will use
691              only the values scheme (protocol) and hostname from the given
692              url.  Choosing 'http://' will disable SSL encryption, just as if
693              --s3-unencrypted-connection were set.
694
695       --s3-european-buckets
696              When using the Amazon S3 backend, create buckets in Europe
697              instead of the default (requires --s3-use-new-style ). Also see
698              the EUROPEAN S3 BUCKETS section.
699
700              NOTE: This option does not apply when using the boto3 backend,
701              which does not create buckets.
702
703              See also A NOTE ON AMAZON S3 below.
704
705       --s3-multipart-chunk-size
706              Chunk size (in MB, default is 20MB) used for S3 multipart
707              uploads. Adjust this to maximize bandwidth usage. For example, a
708              chunk size of 10MB and a volsize of 100MB would result in 10
709              chunks per volume upload.
710
              NOTE: This value should be an even multiple of your --volsize
              for optimal performance.
713
714              See also A NOTE ON AMAZON S3 below.
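
              For example, to upload each 200MB volume as eight 25MB chunks
              (the values and URL are illustrative):

              duplicity --s3-multipart-chunk-size 25 --volsize 200
              /home/me s3:///bucket_name/backup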
715
       --s3-multipart-max-procs
              Maximum number of concurrent uploads when performing a
              multipart upload.  The default is 4.  You can adjust this
              number to maximize bandwidth and CPU utilization.
720
721              NOTE: Too many concurrent uploads may have diminishing returns.
722
723              See also A NOTE ON AMAZON S3 below.
724
725       --s3-multipart-max-timeout
726              You can control the maximum time (in seconds) a multipart upload
727              can spend on uploading a single chunk to S3. This may be useful
728              if you find your system hanging on multipart uploads or if you'd
729              like to control the time variance when uploading to S3 to ensure
730              you kill connections to slow S3 endpoints.
731
732              NOTE: This has no effect when using boto3 backend.
733
734              See also A NOTE ON AMAZON S3 below.
735
736       --s3-region-name
737              Specifies the region of the S3 storage. Usually mandatory if the
738              bucket is created in a specific region.
739
740              NOTE: Only in boto3 backend.
741
742       --s3-unencrypted-connection
743              Disable SSL for connections to S3. This may be much faster, at
744              some cost to confidentiality.
745
746              With this option set, anyone between your computer and S3 can
747              observe the traffic and will be able to tell: that you are using
748              Duplicity, the name of the bucket, your AWS Access Key ID, the
749              increment dates and the amount of data in each increment.
750
751              This option affects only the connection, not the GPG encryption
752              of the backup increment files. Unless that is disabled, an
753              observer will not be able to see the file names or contents.
754
755              See also A NOTE ON AMAZON S3 below.
756
757       --s3-use-deep-archive
758              Store volumes using Glacier Deep Archive S3 when uploading to
759              Amazon S3. This storage class has a lower cost of storage but a
760              higher per-request cost along with delays of up to 48 hours from
761              the time of retrieval request. This storage cost is calculated
762              against a 180-day storage minimum. According to Amazon this
763              storage is ideal for data archiving and long-term backup
764              offering 99.999999999% durability.  To restore a backup you will
765              have to manually migrate all data stored on AWS Glacier Deep
766              Archive back to Standard S3 and wait for AWS to complete the
767              migration.
768
769              NOTE: Duplicity will store the manifest.gpg files from full and
770              incremental backups on AWS S3 standard storage to allow quick
771              retrieval for later incremental backups, all other data is
772              stored in S3 Glacier Deep Archive.
773
774       --s3-use-glacier
775              Store volumes using Glacier Flexible Storage when uploading to
776              Amazon S3. This storage class has a lower cost of storage but a
777              higher per-request cost along with delays of up to 12 hours from
778              the time of retrieval request. This storage cost is calculated
779              against a 90-day storage minimum. According to Amazon this
780              storage is ideal for data archiving and long-term backup
781              offering 99.999999999% durability.  To restore a backup you will
782              have to manually migrate all data stored on AWS Glacier back to
783              Standard S3 and wait for AWS to complete the migration.
784
785              NOTE: Duplicity will store the manifest.gpg files from full and
786              incremental backups on AWS S3 standard storage to allow quick
787              retrieval for later incremental backups, all other data is
788              stored in S3 Glacier.
789
790       --s3-use-glacier-ir
791              Store volumes using Glacier Instant Retrieval when uploading to
792              Amazon S3. This storage class is similar to Glacier Flexible
793              Storage but offers instant retrieval at standard speeds.
794
795              NOTE: Duplicity will store the manifest.gpg files from full and
796              incremental backups on AWS S3 standard storage to allow quick
797              retrieval for later incremental backups, all other data is
798              stored in S3 Glacier.
799
800       --s3-use-ia
801              Store volumes using Standard - Infrequent Access when uploading
802              to Amazon S3.  This storage class has a lower storage cost but a
803              higher per-request cost, and the storage cost is calculated
804              against a 30-day storage minimum. According to Amazon, this
805              storage is ideal for long-term file storage, backups, and
806              disaster recovery.
807
       --s3-use-multiprocessing
              Allow multipart volume uploads to S3 through multiprocessing.
              This option requires Python 2.6 and can be used to make uploads
              to S3 more efficient.  If enabled, files duplicity uploads to
              S3 will be split into chunks and uploaded in parallel.  Useful
              if you want to saturate your bandwidth or if large files are
              failing during upload.
815
816              NOTE: This has no effect when using the boto3 backend. Boto3
817              always attempts to use multiprocessing.
818
819              See also A NOTE ON AMAZON S3 below.
820
821       --s3-use-new-style
822              When operating on Amazon S3 buckets, use new-style subdomain
823              bucket addressing. This is now the preferred method to access
824              Amazon S3, but is not backwards compatible if your bucket name
825              contains upper-case characters or other characters that are not
826              valid in a hostname.
827
828              NOTE: This option has no effect when using the boto3 backend,
829              which will always use new style subdomain bucket naming.
830
831              See also A NOTE ON AMAZON S3 below.
832
833       --s3-use-onezone-ia
834              Store volumes using One Zone - Infrequent Access when uploading
835              to Amazon S3.  This storage is similar to Standard - Infrequent
836              Access, but only stores object data in one Availability Zone.
837
       --s3-use-rrs
              Store volumes using Reduced Redundancy Storage when uploading
              to Amazon S3.  This will lower the cost of storage but also
              lower the durability of stored volumes to 99.99% instead of the
              99.999999999% durability offered by Standard Storage on S3.
843
       --s3-use-server-side-encryption
              Allow use of server side encryption in S3.
846
847       --s3-use-server-side-kms-encryption
848       --s3-kms-key-id key_id
849       --s3-kms-grant grant
850              Enable server-side encryption using key management service.
851
852       --scp-command command
853              (only ssh pexpect backend with --use-scp enabled) The command
854              will be used instead of "scp" to send or receive files.  To list
855              and delete existing files, the sftp command is used.
856              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
857
858       --sftp-command command
859              (only ssh pexpect backend) The command will be used instead of
860              "sftp".
861              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
862
863       --short-filenames
864              If this option is specified, the names of the files duplicity
865              writes will be shorter (about 30 chars) but less understandable.
866              This may be useful when backing up to MacOS or another OS or FS
867              that doesn't support long filenames.
868
869       --sign-key key-id
870              This option can be used when backing up, restoring or verifying.
              When backing up, all backup files will be signed with the given
              key-id.
872              When restoring, duplicity will signal an error if any remote
873              file is not signed with the given key-id. The key-id can be
874              given in any of the formats supported by GnuPG; see gpg(1),
875              section "HOW TO SPECIFY A USER ID" for details.  Should be
876              specified only once because currently only one signing key is
877              supported. Last entry overrides all other entries.
878              See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
879
       --ssh-askpass
              Tells the ssh backend to prompt the user for the remote system
              password, if it was not defined in the target url and no
              FTP_PASSWORD env var is set.  This password is also used for
              passphrase-protected ssh keys.
885
       --ssh-options options
              Allows you to pass options to the ssh backend.  Can be
              specified multiple times or as a space separated options list.
              The options list should be of the form "-oOpt1='parm1'
              -oOpt2='parm2'" where the option string is quoted and the only
              spaces allowed are between options.  The option string will be
              passed verbatim to both scp and sftp, whose command line syntax
              differs slightly; the options should therefore be given in the
              long option format described in ssh_config(5).
895
896              example of a list:
897
898              duplicity --ssh-options="-oProtocol=2
899              -oIdentityFile='/my/backup/id'" /home/me
900              scp://user@host/some_dir
901
902              example with multiple parameters:
903
904              duplicity --ssh-options="-oProtocol=2" --ssh-
905              options="-oIdentityFile='/my/backup/id'" /home/me
906              scp://user@host/some_dir
907
908              NOTE: The ssh paramiko backend currently supports only the -i or
909              -oIdentityFile or -oUserKnownHostsFile or -oGlobalKnownHostsFile
910              settings. If needed provide more host specific options via
911              ssh_config file.
912
913       --ssl-cacert-file file
914              (only webdav & lftp backend) Provide a cacert file for ssl
915              certificate verification.
916
917              See also A NOTE ON SSL CERTIFICATE VERIFICATION.
918
919       --ssl-cacert-path path/to/certs/
920              (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
921              recent lftp) Provide a path to a folder containing cacert files
922              for ssl certificate verification.
923
924              See also A NOTE ON SSL CERTIFICATE VERIFICATION.
925
926       --ssl-no-check-certificate
927              (only webdav & lftp backend) Disable ssl certificate
928              verification.
929
930              See also A NOTE ON SSL CERTIFICATE VERIFICATION.
931
932       --swift-storage-policy
933              Use this storage policy when operating on Swift containers.
934
935              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.
936
       --metadata-sync-mode mode
              This option defaults to 'partial', but you can set it to
              'full'.
939
940              Use 'partial' to avoid syncing metadata for backup chains that
941              you are not going to use.  This saves time when restoring for
942              the first time, and lets you restore an old backup that was
943              encrypted with a different passphrase by supplying only the
944              target passphrase.
945
946              Use 'full' to sync metadata for all backup chains on the remote.
947
948       --tempdir directory
949              Use this existing directory for duplicity temporary files
950              instead of the system default, which is usually the /tmp
951              directory. This option supersedes any environment variable.
952
953              See also ENVIRONMENT VARIABLES.
954
955       -ttime, --time time, --restore-time time
956              Specify the time from which to restore or list files.
957
958       --time-separator char
959              Use char as the time separator in filenames instead of colon
960              (":").
961
962       --timeout seconds
963              Use seconds as the socket timeout value if duplicity begins to
964              timeout during network operations.  The default is 30 seconds.
965
966       --use-agent
967              If this option is specified, then --use-agent is passed to the
968              GnuPG encryption process and it will try to connect to gpg-agent
969              before it asks for a passphrase for --encrypt-key or --sign-key
970              if needed.
971
              NOTE: Contrary to previous versions of duplicity, this option
              will also be honored by GnuPG 2 and newer versions. If GnuPG 2
              is in use, duplicity passes the option --pinentry-mode=loopback
              to the gpg process unless --use-agent is specified on the
              duplicity command line. This has the effect that GnuPG 2 uses
              the agent only if --use-agent is given, just like GnuPG 1.
978
979       --verbosity level, -vlevel
980              Specify output verbosity level (log level).  Named levels and
981              corresponding values are 0 Error, 2 Warning, 4 Notice (default),
982              8 Info, 9 Debug (noisiest).
983              level may also be
984                     a character: e, w, n, i, d
985                     a word: error, warning, notice, info, debug
986
987              The options -v4, -vn and -vnotice are functionally equivalent,
988              as are the mixed/upper-case versions -vN, -vNotice and -vNOTICE.
989
990       --version
991              Print duplicity's version and quit.
992
993       --volsize number
994              Change the volume size to number MB. Default is 200MB.
995
       --webdav-headers csv formatted key,value pairs
              The input format is a comma separated list of key,value pairs.
              Standard CSV encoding may be used.
999
1000              For example to set a Cookie use 'Cookie,name=value', or
1001              '"Cookie","name=value"'.
1002
1003              You can set multiple headers, e.g.
1004              '"Cookie","name=value","Authorization","xxx"'.
1005

ENVIRONMENT VARIABLES

       TMPDIR, TEMP, TMP
              In decreasing order of importance, specifies the directory to
              use for temporary files (inherited from Python's tempfile
              module).  The option --tempdir supersedes any of these.
       FTP_PASSWORD
              Supported by most backends which are password capable.  More
              secure than setting it in the backend url (which might be
              readable in the operating system's process listing to other
              users on the same machine).
1017       PASSPHRASE
1018              This passphrase is passed to GnuPG. If this is not set, the user
1019              will be prompted for the passphrase.
       SIGN_PASSPHRASE
              The passphrase to be used for --sign-key.  If omitted and the
              sign key is also one of the keys to encrypt against, PASSPHRASE
              will be reused instead.  Otherwise, if a passphrase is needed
              but not set, the user will be prompted for it.
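
              For example, a non-interactive signed and encrypted backup
              might supply both passphrases via the environment; the key
              ids, passphrases and URL below are illustrative:

              PASSPHRASE=secret1 SIGN_PASSPHRASE=secret2 duplicity
              --encrypt-key ENCKEYID --sign-key SIGNKEYID /home/me
              sftp://uid@other.host/some_dir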
1025
1026              Other environment variables may be used to configure specific
1027              backends.  See the notes for the particular backend.
1028

URL FORMAT

       Duplicity uses the URL format (as standard as possible) to define
       data locations.  The major difference is that the whole host section
       is optional for some backends.
1033       NOTE: If path starts with an extra '/' it usually denotes an absolute
1034       path on the backend.
1035
1036       The generic format for a URL is:
1037
1038              scheme://[[user[:password]@]host[:port]/][/]path
1039
1040       or
1041
1042              scheme://[/]path
1043
       It is not recommended to expose the password on the command line,
       since it could be revealed to anyone with permissions to do process
       listings; it is permitted, however.  Consider setting the environment
       variable FTP_PASSWORD instead, which is used by most, if not all,
       backends, regardless of its name.
1049
1050       In protocols that support it, the path may be preceded by a single
1051       slash, '/path', to represent a relative path to the target home
1052       directory, or preceded by a double slash, '//path', to represent an
1053       absolute filesystem path.
1054
1055       NOTE: Scheme (protocol) access may be provided by more than one
1056       backend.  In case the default backend is buggy or simply not working in
1057       a specific case it might be worth trying an alternative implementation.
1058       Alternative backends can be selected by prefixing the scheme with the
1059       name of the alternative backend e.g. ncftp+ftp:// and are mentioned
1060       below the scheme's syntax summary.
1061
1062       Formats of each of the URL schemes follow:
1063
1064       Amazon Drive Backend
1065              ad://some_dir
1066
1067              See also A NOTE ON AMAZON DRIVE
1068
1069       Azure
1070              azure://container-name
1071
1072              See also A NOTE ON AZURE ACCESS
1073
1074       B2
1075              b2://account_id[:application_key]@bucket_name/[folder/]
1076
1077       Box
1078              box:///some_dir[?config=path_to_config]
1079
1080              See also A NOTE ON BOX ACCESS
1081
1082       Cloud Files (Rackspace)
1083              cf+http://container_name
1084
1085              See also A NOTE ON CLOUD FILES ACCESS
1086
1087       Dropbox
1088              dpbx:///some_dir
1089
1090              Make sure to read A NOTE ON DROPBOX ACCESS first!
1091
1092       File (local file system)
1093              file://[relative|/absolute]/local/path
1094
1095       FISH (Files transferred over Shell protocol) over ssh
1096              fish://user[:password]@other.host[:port]/[relative|/absolute]_path
1097
1098       FTP
1099              ftp[s]://user[:password]@other.host[:port]/some_dir
1100
1101              NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
1102              default is lftp+ftp://...
1103
1104       Google Cloud Storage (GCS via Interoperable Access)
1105              s3://bucket[/path]
1106
1107              NOTE: use boto+gs://bucket[/path] or boto+s3://bucket[/path] to
1108              use legacy boto backend. default is boto3+s3://
1109
1110              See A NOTE ON GOOGLE CLOUD STORAGE about needed endpoint option
1111              and env vars for authentication.
1112
1113       Google Docs
1114              gdocs://user[:password]@other.host/some_dir
1115
1116              NOTE: use pydrive+, gdata+ prefixes to enforce a specific
1117              backend, default is pydrive+gdocs://...
1118
1119       Google Drive
1120
              gdrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir
1123
1124              See also A NOTE ON GDRIVE BACKEND below.
1125
1126       HSI
1127              hsi://user[:password]@other.host/some_dir
1128
1129       hubiC
1130              cf+hubic://container_name
1131
1132              See also A NOTE ON HUBIC
1133
1134       IMAP email storage
1135              imap[s]://user[:password]@host.com[/from_address_prefix]
1136
1137              See also A NOTE ON IMAP
1138
1139       MediaFire
1140              mf://user[:password]@mediafire.com/some_dir
1141
1142              See also A NOTE ON MEDIAFIRE BACKEND below.
1143
1144       MEGA.nz cloud storage (only works for accounts created prior to
1145       November 2018, uses "megatools")
1146              mega://user[:password]@mega.nz/some_dir
1147
1148              NOTE: if not given in the URL, relies on password being stored
1149              within $HOME/.megarc (as used by the "megatools" utilities)
1150
1151       MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd"
1152       tools)
1153              megav2://user[:password]@mega.nz/some_dir
1154              megav3://user[:password]@mega.nz/some_dir[?no_logout=1] (For
1155              latest MEGAcmd)
1156
              NOTE: although "MEGAcmd" no longer uses a configuration file,
              for convenience this backend searches for the stored user
              password in the $HOME/.megav2rc file (same syntax as the old
              $HOME/.megarc):
1161                  [Login]
1162                  Username = MEGA_USERNAME
1163                  Password = MEGA_PASSWORD
1164
1165       multi
1166              multi:///path/to/config.json
1167
1168              See also A NOTE ON MULTI BACKEND below.
1169
       OneDrive Backend
              onedrive://some_dir

              See also A NOTE ON ONEDRIVE BACKEND
1172
1173       Par2 Wrapper Backend
1174              par2+scheme://[user[:password]@]host[:port]/[/]path
1175
1176              See also A NOTE ON PAR2 WRAPPER BACKEND
1177
1178       Public Cloud Archive (OVH)
1179              pca://container_name[/prefix]
1180
1181              See also A NOTE ON PCA ACCESS
1182
1183       pydrive
              pydrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir
1186
1187              See also A NOTE ON PYDRIVE BACKEND below.
1188
       Rclone Backend
              rclone://remote:/some_dir

              See also A NOTE ON RCLONE BACKEND
1193
1194       Rsync via daemon
1195              rsync://user[:password]@host.com[:port]::[/]module/some_dir
1196
1197       Rsync over ssh (only key auth)
1198              rsync://user@host.com[:port]/[relative|/absolute]_path
1199
1200       S3 storage (Amazon)
1201              s3:///bucket_name[/path]
1202
1203              defaults to the boto3 backend boto3+s3://
1204              alternatively try the legacy boto backend
1205              boto+s3://host[:port]/bucket_name[/path]
1206
1207              For details see A NOTE ON AMAZON S3 below.
1208
1209       SCP/SFTP Secure Copy Protocol/SSH File Transfer Protocol
1210              scp://.. or
1211              sftp://user[:password]@other.host[:port]/[relative|/absolute]_path
1212
1213              defaults are paramiko+scp:// and paramiko+sftp://
1214              alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
1215              See also --ssh-askpass, --ssh-options and A NOTE ON SSH
1216              BACKENDS.
1217
1218       slate
1219              slate://[slate-id]
1220
1221              See also A NOTE ON SLATE BACKEND
1222
1223       Swift (Openstack)
1224              swift://container_name[/prefix]
1225
1226              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
1227
1228       Tahoe-LAFS
1229              tahoe://alias/directory
1230
1231       WebDAV
1232              webdav[s]://user[:password]@other.host[:port]/some_dir
1233
1234              alternatively try lftp+webdav[s]://
1235
1236       Optical media (ISO9660 CD/DVD/Bluray using xorriso)
1237              xorriso:///dev/byOpticalDrive[:/path/to/directory/on/iso]
1238              xorriso:///path/to/image.iso[:/path/to/directory/on/iso]
1239
1240
1241              See also A NOTE ON THE XORRISO BACKEND
1242

TIME FORMATS

1244       duplicity uses time strings in two places.  Firstly, many of the files
1245       duplicity creates will have the time in their filenames in the w3
1246       datetime format as described in a w3 note at http://www.w3.org/TR/NOTE-
1247       datetime.  Basically they look like "2001-07-15T04:09:38-07:00", which
1248       means what it looks like.  The "-07:00" section means the time zone is
1249       7 hours behind UTC.
1250       Secondly, the -t, --time, and --restore-time options take a time
1251       string, which can be given in any of several formats:
1252       1.     the string "now" (refers to the current time)
       2.     a sequence of digits, like "123456890" (indicating the time in
              seconds after the epoch)
1255       3.     A string like "2002-01-25T07:00:00+02:00" in datetime format
1256       4.     An interval, which is a number followed by one of the characters
1257              s, m, h, D, W, M, or Y (indicating seconds, minutes, hours,
1258              days, weeks, months, or years respectively), or a series of such
1259              pairs.  In this case the string refers to the time that preceded
1260              the current time by the length of the interval.  For instance,
1261              "1h78m" indicates the time that was one hour and 78 minutes ago.
1262              The calendar here is unsophisticated: a month is always 30 days,
1263              a year is always 365 days, and a day is always 86400 seconds.
1264       5.     A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY, or
1265              MM-DD-YYYY, which indicates midnight on the day in question,
1266              relative to the current time zone settings.  For instance,
1267              "2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
1268              2002.
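
       For example, if run at midnight UTC (in a UTC timezone) on March 8th,
       2002, the following invocations (URL and paths are placeholders)
       would all restore the backup state as of March 5th, 2002:

              duplicity -t 3D scp://user@host/backup /home/me/restored
              duplicity -t 2002-03-05 scp://user@host/backup /home/me/restored
              duplicity -t 1015286400 scp://user@host/backup /home/me/restored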
1269

FILE SELECTION

1271       When duplicity is run, it searches through the given source directory
1272       and backs up all the files specified by the file selection system,
1273       unless --files-from has been specified in which case the passed list of
1274       individual files is used instead.
1275
1276       The file selection system comprises a number of file selection
1277       conditions, which are set using one of the following command line
1278       options:
1279
1280              --exclude
1281              --exclude-device-files
1282              --exclude-if-present
1283              --exclude-filelist
1284              --exclude-regexp
1285              --include
1286              --include-filelist
1287              --include-regexp
1288
1289       For each individual file found in the source directory, the file
1290       selection conditions are checked in the order they are specified on the
1291       command line.  Should a selection condition match, the file will be
1292       included or excluded accordingly and the file selection system will
1293       proceed to the next file without checking the remaining conditions.
1294
1295       Earlier arguments therefore take precedence where multiple conditions
1296       match any given file, and are thus usually given in order of decreasing
1297       specificity.  If no selection conditions match a given file, then the
1298       file is implicitly included.
1299
1300       For example,
1301
1302              duplicity --include /usr --exclude /usr /usr
1303              scp://user@host/backup
1304
1305       is exactly the same as
1306
1307              duplicity /usr scp://user@host/backup
1308
1309       because the --include directive matches all files in the backup source
1310       directory, and takes precedence over the contradicting --exclude option
1311       as it comes first.
1312
1313       As a more meaningful example,
1314
1315              duplicity --include /usr/local/bin --exclude /usr/local /usr
1316              scp://user@host/backup
1317
1318       would backup the /usr/local/bin directory (and its contents), but not
1319       /usr/local/doc. Note that this is not the same as simply specifying
1320       /usr/local/bin as the backup source, as other files and folders under
1321       /usr will also be (implicitly) included.
1322
1323       The order of the --include and --exclude arguments is important. In the
1324       previous example, if the less specific --exclude directive had
1325       precedence it would prevent the more specific --include from matching
1326       any files.
1327
1328       The patterns passed to the --include, --exclude, --include-filelist,
       and --exclude-filelist options are interpreted as extended shell
1330       globbing patterns by default. This behaviour can be changed with the
1331       following filter mode arguments:
1332
1333              --filter-globbing
1334              --filter-literal
1335              --filter-regexp
1336
1337       These arguments change the interpretation of the patterns used in
1338       selection conditions, affecting all subsequent file selection options
1339       passed on the command line. They may be specified multiple times in
1340       order to switch pattern interpretations as needed.
1341
1342       Literal strings differ from globs in that the pattern must match the
1343       filename exactly. This can be useful where filenames contain characters
1344       which have special meaning in shell globs or regular expressions. If
1345       passing dynamically generated file lists to duplicity using the
1346       --include-filelist or --exclude-filelist options, then the use of
1347       --filter-literal is recommended unless regular expression or globbing
1348       is specifically required.
1349
1350       The regular expression language used for selection conditions specified
1351       with --include-regexp , --exclude-regexp , or when --filter-regexp is
1352       in effect is as implemented by the Python standard library.
1353
       Extended shell globbing patterns may contain: *, **, ?, and [...]
1355       (character ranges). As in a normal shell, * can be expanded to any
1356       string of characters not containing "/", ?  expands to any single
1357       character except "/", and [...]  expands to a single character of those
1358       characters specified (ranges are acceptable).  The pattern ** expands
1359       to any string of characters whether or not it contains "/".
1360
1361       In addition to the above filter mode arguments, the following can be
1362       used in the same fashion to enable (default) or disable case
       sensitivity in the evaluation of file selection conditions:
1364
1365              --filter-ignorecase
1366              --filter-strictcase
1367
1368       An example of filter mode switching including case insensitivity is
1369
              --filter-ignorecase --include /usr/bin/*.PY --filter-literal
              --include /usr/bin/special?file*name --filter-strictcase
              --exclude /usr/bin
1373
1374       which would backup *.py, *.pY, *.Py, and *.PY files under /usr/bin and
1375       also the single literally specified file with globbing characters in
       the name. The use of --filter-strictcase is not technically necessary
       here, but is included because leaving case-insensitive matching in
       effect could (depending on the backup source path) cause unexpected
       interactions between the --include and --exclude options, should the
       directory portion of the path (/usr/bin) contain any uppercase
       characters.
1381
1382       If the pattern starts with "ignorecase:" (case insensitive), then this
1383       prefix will be removed and any character in the string can be replaced
1384       with an upper- or lowercase version of itself. This prefix is a legacy
1385       feature supported for shell globbing selection conditions only, but for
1386       backward compatibility reasons is otherwise considered part of the
1387       pattern itself (use --filter-ignorecase instead).
1388
1389       Remember that you may need to quote patterns when typing them into a
1390       shell, so the shell does not interpret the globbing patterns or
1391       whitespace characters before duplicity sees them.
1392
1393       Selection patterns should generally be thought of as filesystem paths
1394       rather than arbitrary strings. For selection conditions using extended
1395       shell globbing patterns, the --exclude pattern option matches a file
1396       if:
1397
1398       1.  pattern can be expanded into the file's filename, or
1399       2.  the file is inside a directory matched by the option.
1400
1401       Conversely, the --include pattern option matches a file if:
1402
1403       1.  pattern can be expanded into the file's filename, or
1404       2.  the file is inside a directory matched by the option, or
       3.  the file is a directory which contains a file matched by the
           option.
1407
1408       For example,
1409
1410              --exclude /usr/local
1411
1412       matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
1413       It is the same as --exclude /usr/local --exclude '/usr/local/**'.
1414       On the other hand
1415
1416              --include /usr/local
1417
1418       specifies that /usr, /usr/local, /usr/local/lib, and
1419       /usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus you
1420       don't have to worry about including parent directories to make sure
1421       that included subdirectories have somewhere to go.
1422
1423       Finally,
1424
1425              --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'
1426
1427       would match a file like /usR/5fOO/hello/there/world.py.  If it did
1428       match anything, it would also match /usr.  If there is no existing file
1429       that the given pattern can be expanded into, the option will not match
1430       /usr alone.
1431
1432       This treatment of patterns in globbing and literal selection conditions
1433       as filesystem paths reduces the number of explicit conditions required.
       However, it does require that the paths described by all variants of
       the --include or --exclude options are fully specified relative to the
       backup source directory.
1437
1438       For selection conditions using literal strings, the same logic applies
1439       except that scenario 1 is for an exact match of the pattern.
1440
1441       For selection conditions using regular expressions the pattern is
1442       evaluated as a regular expression rather than a filesystem path.
1443       Scenario 3 in the above therefore does not apply, the implications of
1444       which are discussed at the end of this section.
1445
       The --include-filelist and --exclude-filelist options also introduce
       file selection conditions.  They direct duplicity to read in a text
       file (either ASCII or UTF-8), each line of which is a file
       specification, and to include or exclude the matching files.  Lines
       are separated by newlines or nulls, depending on whether the --null-
       separator switch was given.
1452
1453       Each line in the filelist will be interpreted as a selection pattern in
1454       the same way --include and --exclude options are interpreted, except
1455       that lines starting with "+ " are interpreted as include directives,
1456       even if found in a filelist referenced by --exclude-filelist.
1457       Similarly, lines starting with "- " exclude files even if they are
1458       found within an include filelist.
1459
1460       For example, if file "list.txt" contains the lines:
1461
1462              /usr/local
1463              - /usr/local/doc
1464              /usr/local/bin
1465              + /var
1466              - /var
1467
1468       then --include-filelist list.txt would include /usr, /usr/local, and
1469       /usr/local/bin.  It would exclude /usr/local/doc,
1470       /usr/local/doc/python, etc.  It would also include /usr/local/man, as
1471       this is included within /usr/local.  Finally, it is undefined what
1472       happens with /var.  A single file list should not contain conflicting
1473       file specifications.
1474
1475       Each line in the filelist will be interpreted as per the current filter
1476       mode in the same way --include and --exclude options are interpreted.
1477       For instance, if the file "list.txt" contains the lines:
1478
1479              dir/foo
1480              + dir/bar
1481              - **
1482
1483       Then --include-filelist list.txt would be exactly the same as
1484       specifying --include dir/foo --include dir/bar --exclude ** on the
1485       command line.
1486
       Note that specifying very large numbers of selection rules as
1488       filelists can incur a substantial performance penalty as these rules
1489       will (potentially) be checked for every file in the backup source
1490       directory. If you need to backup arbitrary lists of specific files
1491       (i.e. not described by regexp patterns or shell globs) then --files-
1492       from is likely to be more performant.
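
       As a minimal sketch (the filelist path and its contents are
       hypothetical, and entries are assumed to be given relative to the
       backup source directory):

              printf 'local/bin/foo\nlocal/share/bar\n' > /tmp/filelist.txt
              duplicity --files-from /tmp/filelist.txt /usr
              scp://user@host/backup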
1493
1494       Finally, the --include-regexp and --exclude-regexp options allow files
1495       to be included and excluded if their filenames match a regular
1496       expression.  Regular expression syntax is too complicated to explain
1497       here, but is covered in Python's library reference.  Unlike the
1498       --include and --exclude options, the regular expression options don't
1499       match files containing or contained in matched files.  So for instance
1500
1501              --include-regexp '[0-9]{7}(?!foo)'
1502
1503       matches any files whose full pathnames contain 7 consecutive digits
1504       which aren't followed by 'foo'.  However, it wouldn't match /home even
1505       if /home/ben/1234567 existed.
1506

A NOTE ON AMAZON DRIVE

1508       1.     The API Keys used for Amazon Drive have not been granted
1509              production limits.  Amazon do not say what the development
              limits are and are not replying to requests to whitelist
1511              duplicity. A related tool, acd_cli, was demoted to development
1512              limits, but continues to work fine except for cases of excessive
1513              usage. If you experience throttling and similar issues with
1514              Amazon Drive using this backend, please report them to the
1515              mailing list.
1516       2.     If you previously used the acd+acdcli backend, it is strongly
1517              recommended to update to the ad backend instead, since it
              interfaces directly with Amazon Drive. You will need to set up
1519              the OAuth once again, but can otherwise keep your backups and
1520              config.
1521

A NOTE ON AMAZON S3

       When backing up to Amazon S3, two backend implementations are
       available: the older backend based on the boto library, which is
       deprecated and no longer maintained, and the newer backend based on
       the boto3 library.  The boto3 backend fixes several known limitations
       in the older backend, which developed as Amazon S3 evolved.
1528
1529       The boto3 backend should behave largely the same as the older backend,
1530       but there are some differences in the supported "--s3-..." options.
1531       Additionally, there are some compatibility differences.
1532       See the documentation of each option above regarding differences
1533       related to each backend.
1534
       The boto3 backend does not support bucket creation.  This deliberate
       choice simplifies the code, and sidesteps problems related to region
1537       selection.  Additionally, it is probably not a good practice to give
1538       your backup role bucket creation rights.  In most cases the role used
1539       for backups should probably be limited to specific buckets.
1540
1541       The boto3 backend only supports newer domain style buckets. Amazon is
1542       moving to deprecate the older bucket style, so migration is
1543       recommended.  Use the boto backend for compatibility with buckets using
1544       older naming conventions.
1545
1546       The boto3 backend does not currently support initiating restores from
1547       the glacier storage class.  When restoring a backup from glacier or
1548       glacier deep archive, the backup files must first be restored out of
1549       band.  There are multiple options when restoring backups from cold
1550       storage, which vary in both cost and speed.  See Amazon's documentation
1551       for details.
1552
1553       Both backends use environment variables for authentication:
1554              AWS_ACCESS_KEY_ID (required),
1555              AWS_SECRET_ACCESS_KEY (required)
1556              or
1557              BOTO_CONFIG (required) pointing to a boto config file.
       For simplicity's sake we will document the use of the AWS_* vars only.
       Consult the boto documentation available on the web if you want to use
       the config file.
1561
1562       boto3 backend example backup command line:
1563
1564              AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
1565              duplicity /some/path s3:///bucket/subfolder
1566
       You may add --s3-endpoint-url (to access non-Amazon S3 services or
       regional endpoints) and may need --s3-region-name (for buckets created
       in specific regions) and other --s3-...  options documented above.
1570
1571       legacy boto backend example backup command line:
1572
1573              AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
1574              duplicity /some/path boto+s3://[host:port]/bucket/subfolder
1575
       The url host setting is optional and allows one to define a custom
       endpoint host. You may add --s3-european-buckets and other --s3-...
       options documented above if needed.
1579
1580

A NOTE ON AZURE ACCESS

1582       The Azure backend requires the Microsoft Azure Storage Blobs client
1583       library for Python to be installed on the system.  See REQUIREMENTS.
1584
1585       It uses the environment variable AZURE_CONNECTION_STRING (required).
1586       This string contains all necessary information such as Storage Account
1587       name and the key for authentication.  You can find it under Access Keys
1588       for the storage account.
1589
       Duplicity will take care of creating the container when performing the
       backup.  Do not create it manually beforehand.
1592
       A container name (as given in the backup url) must be a valid DNS
       name, conforming to the following naming rules:
1595
1596              1.     Container names must start with a letter or number, and
1597                     can contain only letters, numbers, and the dash (-)
1598                     character.
1599              2.     Every dash (-) character must be immediately preceded and
1600                     followed by a letter or number; consecutive dashes are
1601                     not permitted in container names.
1602              3.     All letters in a container name must be lowercase.
1603              4.     Container names must be from 3 through 63 characters
1604                     long.
1605
1606       These rules come from Azure; see https://docs.microsoft.com/en-
1607       us/rest/api/storageservices/naming-and-referencing-
1608       containers--blobs--and-metadata
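
       As a sketch, a backup invocation might look like this (the container
       name is a placeholder; the connection string is the one shown under
       Access Keys for your storage account):

              AZURE_CONNECTION_STRING='<connection_string>' duplicity
              /some/path azure://my-backup-container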
1609

A NOTE ON BOX ACCESS

1611       The box backend requires boxsdk with jwt support to be installed on the
1612       system.  See REQUIREMENTS.
1613
       It uses the environment variable BOX_CONFIG_PATH (optional).  This
       string contains the path to the box custom app's config.json.  Either
       this environment variable or the config query parameter in the url
       needs to be specified; if both are specified, the query parameter
       takes precedence.
1618
1619   Create a Box custom app
       In order to use the box backend, the user needs to create a box
       custom app in the box developer console
       (https://app.box.com/developers/console).

       After creating a new custom app, please make sure it is configured as
       follows:
1625
1626              1.     Choose "App Access Only" for "App Access Level"
1627              2.     Check "Write all files and folders stored in Box"
1628              3.     Generate a Public/Private Keypair
1629
       The user also needs to grant the created custom app permission in the
       admin console (https://app.box.com/master/custom-apps) by clicking the
       "+" button and entering the client_id, which can be found on the
       custom app's configuration page.
1634

A NOTE ON CLOUD FILES ACCESS

1636       Pyrax is Rackspace's next-generation Cloud management API, including
1637       Cloud Files access.  The cfpyrax backend requires the pyrax library to
1638       be installed on the system.  See REQUIREMENTS.
1639
1640       Cloudfiles is Rackspace's now deprecated implementation of OpenStack
1641       Object Storage protocol.  Users wishing to use Duplicity with Rackspace
1642       Cloud Files should migrate to the new Pyrax plugin to ensure support.
1643
1644       The backend requires python-cloudfiles to be installed on the system.
1645       See REQUIREMENTS.
1646
1647       It uses three environment variables for authentication:
1648       CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
1649       CLOUDFILES_AUTHURL (optional)
1650
1651       If CLOUDFILES_AUTHURL is unspecified it will default to the value
1652       provided by python-cloudfiles, which points to rackspace, hence this
1653       value must be set in order to use other cloud files providers.
1654

A NOTE ON DROPBOX ACCESS

       1.     First of all, the Dropbox backend requires a valid
              authentication token. It should be passed via the
              DPBX_ACCESS_TOKEN environment variable.
              To obtain it, please create a 'Dropbox API' application at:
              https://www.dropbox.com/developers/apps/create
              Then visit the app settings and use the 'Generated access
              token' under the OAuth2 section.
              Alternatively you can let duplicity generate the access token
              itself. In that case, temporarily export DPBX_APP_KEY and
              DPBX_APP_SECRET using the values from the app settings page
              and run duplicity interactively.
              It will print the URL that you need to open in the browser to
              obtain an OAuth2 token for the application. Just follow the
              on-screen instructions and then put the generated token into
              the DPBX_ACCESS_TOKEN variable. Once done, feel free to unset
              DPBX_APP_KEY and DPBX_APP_SECRET.
1672
       2.     "some_dir" must already exist in the Dropbox folder. Depending
              on the access token kind it may be:
                     Full Dropbox: path is absolute and starts from the
                     'Dropbox' root folder.
                     App Folder: path is relative to the application folder.
                     Dropbox client will show it in ~/Dropbox/Apps/<app-name>
1679
       3.     When using Dropbox for storage, be aware that all files,
              including the ones in the Apps folder, will be synced to all
              connected computers.  You may prefer to use a separate Dropbox
              account specifically for the backups, and not connect any
              computers to that account. Alternatively you can configure
              selective sync on all computers to avoid syncing of backup
              files.
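
       As a sketch, once the token is available (the folder name is a
       placeholder):

              DPBX_ACCESS_TOKEN=<token> duplicity /home/me dpbx:///some_dir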
1686

A NOTE ON EUROPEAN S3 BUCKETS

       Amazon S3 provides the ability to choose the location of a bucket
       upon its creation. The purpose is to enable the user to choose a
       location that is network-topologically closer to the user, as this
       may allow for faster data transfers.
1692       duplicity will create a new bucket the first time a bucket access is
1693       attempted. At this point, the bucket will be created in Europe if
1694       --s3-european-buckets was given. For reasons having to do with how the
1695       Amazon S3 service works, this also requires the use of the --s3-use-
1696       new-style option. This option turns on subdomain based bucket
1697       addressing in S3. The details are beyond the scope of this man page,
1698       but it is important to know that your bucket must not contain upper
1699       case letters or any other characters that are not valid parts of a
1700       hostname. Consequently, for reasons of backwards compatibility, use of
1701       subdomain based bucket addressing is not enabled by default.
1702       Note that you will need to use --s3-use-new-style for all operations on
1703       European buckets; not just upon initial creation.
       You only need to use --s3-european-buckets upon initial creation, but
       you may use it at all times for consistency.
1706       Further note that when creating a new European bucket, it can take a
1707       while before the bucket is fully accessible. At the time of this
1708       writing it is unclear to what extent this is an expected feature of
1709       Amazon S3, but in practice you may experience timeouts, socket errors
1710       or HTTP errors when trying to upload files to your newly created
1711       bucket. Give it a few minutes and the bucket should function normally.
1712

A NOTE ON FILENAME PREFIXES

1714       Filename prefixes can be used in multi backend with mirror mode to
1715       define affinity rules. They can also be used in conjunction with S3
1716       lifecycle rules to transition archive files to Glacier, while keeping
1717       metadata (signature and manifest files) on S3.
1718
1719       Duplicity does not require access to archive files except when
1720       restoring from backup.
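
       As a sketch, volume data can be given a distinct prefix so that an S3
       lifecycle rule matching "archive_" transitions only the archive files
       (prefixes, paths and target are placeholders):

              duplicity --file-prefix-archive archive_ --file-prefix-manifest
              meta_ --file-prefix-signature meta_ /some/path
              s3:///bucket/subfolder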
1721

A NOTE ON GOOGLE CLOUD STORAGE (GCS via Interoperable Access)

1723   Overview
       Duplicity access to GCS currently relies on its Interoperability API
1725       (basically S3 for GCS).  This needs to actively be enabled before
1726       access is possible. For details read the next section Preparations
1727       below.
       Two backends are available to access S3, namely boto3, which is used
       via s3:// (an alias for boto3+s3:// ), and the legacy boto backend,
       usable via boto+s3://.
1731
1732   Preparations
1733       1.     login on https://console.cloud.google.com/
1734       2.     go to Cloud Storage->Settings->Interoperability
1735       3.     create a Service account (if needed)
1736       4.     create Service account HMAC access key and secret (!!instantly
1737              copy!!  the secret, it can NOT be recovered later)
1738       5.     go to Cloud Storage->Browser
1739       6.     create a bucket
1740       7.     add permissions for Service account that was used to set up
1741              Interoperability access above
1742
1743       Once set up you can use the generated Interoperable Storage Access key
1744       and secret and pass them to duplicity as described in the next section.
1745
1746   Usage
       The following examples show accessing GCS via S3 for a collection-
       status action.  The env vars, options and url format shown can of
       course be applied to all other actions as well.
1750
1751       using boto3 supplying the --s3-endpoint-url manually.
1752
              AWS_ACCESS_KEY_ID=<keyid> AWS_SECRET_ACCESS_KEY=<secret>
1754              duplicity collection-status s3://<bucket>/<folder>
1755              --s3-endpoint-url=https://storage.googleapis.com
1756
1757       or alternatively with legacy boto using either boto+gs://.
1758
1759              GS_ACCESS_KEY_ID=<keyid> GS_SECRET_ACCESS_KEY=<secret> duplicity
1760              collection-status boto+gs://<bucket>/<folder>
1761
1762              NOTE: The auth env vars are prefixed GS_ in this case!
1763
       or boto+s3:// supplying the --s3-endpoint-url manually.

              AWS_ACCESS_KEY_ID=<keyid> AWS_SECRET_ACCESS_KEY=<secret>
              duplicity collection-status boto+s3://<bucket>/<folder>
              --s3-endpoint-url=https://storage.googleapis.com
1769
1770       Alternatively, you can run gsutil config -a to have the Google Cloud
1771       Storage utility populate the ~/.boto configuration file.
1772
1773       NOTE: Also see section URL FORMAT for a brief overview about the url
1774       format expected.
1775

A NOTE ON GDRIVE BACKEND

       GDrive: is a rewritten PyDrive: backend with fewer dependencies and a
       simpler setup - it uses the JSON keys downloaded directly from Google
       Cloud Console.
1780
       Note that Google has two drive types, `Shared (previously Team)
       Drives` and `My Drive`; both can be shared but require different
       addressing.
       For a Google Shared Drives folder

       The Shared Drive ID is specified as a query parameter, driveID, in the
       backend URL.  Example:
             gdrive://developer.gserviceaccount.com/target-
       folder/?driveID=<SHARED DRIVE ID>
1790
       For a Google My Drive based shared folder

       The My Drive folder ID is specified as a query parameter,
       myDriveFolderID, in the backend URL.  Example:
             export GOOGLE_SERVICE_ACCOUNT_URL=<serviceaccount-
       name>@<serviceaccount-name>.iam.gserviceaccount.com
             gdrive://${GOOGLE_SERVICE_ACCOUNT_URL}/<target-folder-name-in-
       myDriveFolder>?myDriveFolderID=root
1799
1800
1801       There are also two ways to authenticate to use GDrive: with a regular
1802       account or with a "service account". With a service account, a separate
1803       account is created, that is only accessible with Google APIs and not a
1804       web login.  With a regular account, you can store backups in your
1805       normal Google Drive.
1806
1807       To use a service account, go to the Google developers console at
1808       https://console.developers.google.com. Create a project, and make sure
1809       Drive API is enabled for the project. In the "Credentials" section,
1810       click "Create credentials", then select Service Account with JSON key.
1811
1812       The GOOGLE_SERVICE_JSON_FILE environment variable needs to contain the
1813       path to the JSON file on duplicity invocation.
1814
1815       export GOOGLE_SERVICE_JSON_FILE=<path-to-serviceaccount-
1816       credentials.json>
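
       As a sketch, a service-account backup invocation might then look like
       this (paths and folder name are placeholders):

              duplicity /some/path gdrive://<serviceaccount-
              email>@developer.gserviceaccount.com/some_dir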
1817
1818
1819       The alternative is to use a regular account. To do this, start as
1820       above, but when creating a new Client ID, select "Create OAuth client
1821       ID", with application type of "Desktop app". Download the
1822       client_secret.json file for the new client, and set the
1823       GOOGLE_CLIENT_SECRET_JSON_FILE environment variable to the path to this
1824       file, and GOOGLE_CREDENTIALS_FILE to a path to a file where duplicity
1825       will keep the authentication token - this location must be writable.
1826
1827       NOTE: As a sanity check, GDrive checks the host and username from the
1828       URL against the JSON key, and refuses to proceed if the addresses do
1829       not match. Either the email (for the service accounts) or Client ID
1830       (for regular OAuth accounts) must be present in the URL. See URL FORMAT
1831       above.
1832
1833   First run / OAuth 2.0 authorization
       During the first run, you will be prompted to visit a URL in your
       browser to grant access to your Google Drive. A temporary HTTP service
       will be started on a local network interface for this purpose (by
       default on http://localhost:8080/).  The IP address/host and port can
       be adjusted if need be by providing the environment variables
       GOOGLE_OAUTH_LOCAL_SERVER_HOST and GOOGLE_OAUTH_LOCAL_SERVER_PORT
       respectively.
1841
       If you are running duplicity in a remote location, you will need to
       make sure that you can access the above HTTP service with a browser,
       utilizing e.g. port forwarding or a temporary firewall permission.
1846
1847       The access credentials will be saved in the JSON file mentioned above
1848       for future use after a successful authorization.
1849

A NOTE ON HUBIC

1851       The hubic backend requires the pyrax library to be installed on the
1852       system. See REQUIREMENTS.  You will need to set your credentials for
1853       hubiC in a file called ~/.hubic_credentials, following this pattern:
1854              [hubic]
1855              email = your_email
1856              password = your_password
1857              client_id = api_client_id
1858              client_secret = api_secret_key
1859              redirect_uri = http://localhost/
1860

A NOTE ON IMAP

1862       An IMAP account can be used as a target for the upload.  The userid may
1863       be specified and the password will be requested.
1864       The from_address_prefix may be specified (and probably should be). The
1865       text will be used as the "From" address in the IMAP server.  Then on a
1866       restore (or list) command the from_address_prefix will distinguish
1867       between different backups.
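
       As a sketch (account, host and prefix are placeholders):

              duplicity /home/me imaps://user@imap.example.com/backup_prefix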
1868

A NOTE ON MEDIAFIRE BACKEND

       This backend requires the mediafire python library to be installed on
       the system. See REQUIREMENTS.
1872
1873       Use URL escaping for username (and password, if provided via command
1874       line):
1875
1876              mf://duplicity%40example.com@mediafire.com/some_folder
1877       The destination folder will be created for you if it does not exist.
1878

A NOTE ON MULTI BACKEND

1880       The multi backend allows duplicity to combine the storage available in
1881       more than one backend store (e.g., you can store across a google drive
1882       account and a onedrive account to get effectively the combined storage
1883       available in both).  The URL path specifies a JSON formatted config
1884       file containing a list of the backends it will use. The URL may also
1885       specify "query" parameters to configure overall behavior.  Each element
1886       of the list must have a "url" element, and may also contain an optional
1887       "description" and an optional "env" list of environment variables used
1888       to configure that backend.
1889   Query Parameters
1890       Query parameters come after the file URL in standard HTTP format for
1891       example:
1892              multi:///path/to/config.json?mode=mirror&onfail=abort
1893              multi:///path/to/config.json?mode=stripe&onfail=continue
1894              multi:///path/to/config.json?onfail=abort&mode=stripe
1895              multi:///path/to/config.json?onfail=abort
1896       Order does not matter, however unrecognized parameters are considered
1897       an error.
1898       mode=stripe
1899              This mode (the default) performs round-robin access to the list
1900              of backends. In this mode, all backends must be reliable as a
1901              loss of one means a loss of one of the archive files.
1902       mode=mirror
1903              This mode accesses backends as a RAID1-store, storing every file
1904              in every backend and reading files from the first-successful
1905              backend.  A loss of any backend should result in no failure.
1906              Note that backends added later will only get new files and may
1907              require a manual sync with one of the other operating ones.
       onfail=continue
              This setting (the default) continues all write operations on a
              best-effort basis. Any failure results in the next backend
              being tried.  Failure is reported only when all backends fail a
              given operation, with the error result from the last failure.
1913       onfail=abort
1914              This setting considers any backend write failure as a
1915              terminating condition and reports the error.  Data reading and
1916              listing operations are independent of this and will try with the
1917              next backend on failure.
1918   JSON File Example
1919              [
1920               {
                "description": "a comment about the backend",
1922                "url": "abackend://myuser@domain.com/backup",
1923                "env": [
1924                  {
1925                   "name" : "MYENV",
1926                   "value" : "xyz"
1927                  },
1928                  {
1929                   "name" : "FOO",
1930                   "value" : "bar"
1931                  }
1932                 ],
1933                 "prefixes": ["prefix1_", "prefix2_"]
1934               },
1935               {
1936                "url": "file:///path/to/dir"
1937               }
1938              ]
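
       A sketch of a backup run using such a config file (the path is a
       placeholder; quoting protects the & from the shell):

              duplicity /some/path
              "multi:///path/to/config.json?mode=mirror&onfail=abort"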
1939

A NOTE ON ONEDRIVE BACKEND

       onedrive:// works with both personal and business onedrive as well as
       sharepoint drives.  On first use you will be provided with a URL to
       authorize with a Microsoft account.  Open it in your web browser.
1944
1945       After authenticating, copy the redirected URL back to duplicity.
1946       Duplicity will fetch a token and store it in
1947       ~/.duplicity_onedrive_oauthtoken.json. This location can be overridden
1948       by setting the DUPLICITY_ONEDRIVE_TOKEN environment variable.
1949
1950       Duplicity uses a default App ID registered with Microsoft Azure AD.  It
1951       will need to be approved by an administrator of your Office365 Tenant
1952       on a business account.
1953
   Register and set your own Microsoft app id
1955       1.     visit https://portal.azure.com
1956
1957       2.     Choose "Enterprise Applications", then "Create your own
1958              Application"
1959
1960       3.     Input your application name and select "Register an application
1961              to integrate with Azure AD".
1962
1963       4.     Continue to the next page and set the redirect uri to
1964              "https://login.microsoftonline.com/common/oauth2/nativeclient",
1965              choosing "Public client/native" from the dropdown. Click create.
1966
1967       5.     Find the application id in "Enterprise Applications" and set the
1968              environment variable DUPLICITY_ONEDRIVE_CLIENT_ID to it.
1969
1970               More information on Microsoft Apps at:
1971              https://learn.microsoft.com/en-us/azure/active-
1972              directory/develop/quickstart-register-app
1973
1974   Backup to a sharepoint site instead of onedrive
       To use a sharepoint site you need to find and provide the site's
       tenant and site id.
1977
1978       1.     Login with your Microsoft Account at
1979              https://<o365_tenant>.sharepoint.com/
1980
1981       2.     Navigate to
1982              https://<o365_tenant>.sharepoint.com/sites/<path_to_site>/_api/site/id
1983
       3.     Copy the displayed UUID (site_id) and set the
              DUPLICITY_ONEDRIVE_ROOT environment variable to
              "sites/<o365_tenant>.sharepoint.com,<site_id>/drive"
1987

A NOTE ON PAR2 WRAPPER BACKEND

1989       Par2 Wrapper Backend can be used in combination with all other backends
1990       to create recovery files. Just add par2+ before a regular scheme (e.g.
1991       par2+ftp://user@host/dir or par2+s3+http://bucket_name ). This will
1992       create par2 recovery files for each archive and upload them all to the
1993       wrapped backend.
1994       Before restoring, archives will be verified. Corrupt archives will be
1995       repaired on the fly if there are enough recovery blocks available.
1996       Use --par2-redundancy percent to adjust the size (and redundancy) of
1997       recovery files in percent.
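
       A sketch creating recovery files with 20 percent redundancy (the
       target is a placeholder):

              duplicity --par2-redundancy 20 /some/path
              par2+sftp://user@host/some_dir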
1998

A NOTE ON PCA ACCESS

       PCA is a long-term data archival solution by OVH. It runs a slightly
       modified version of Openstack Swift introducing latency in the data
       retrieval process.  It is a good pick for a multi backend
       configuration where it receives the volumes, while another backend is
       used to store manifests and signatures.
2005
       The backend requires python-swiftclient to be installed on the system.
       python-keystoneclient is also needed to interact with OpenStack's
       Keystone Identity service.  See REQUIREMENTS.
2009
       It uses the following environment variables for authentication:
       PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL
       (required), PCA_USERID (optional), PCA_TENANTID (optional, but either
       the tenant name or tenant id must be supplied), PCA_REGIONNAME
       (optional), PCA_TENANTNAME (optional, but either the tenant name or
       tenant id must be supplied).
2016
2017       If the user was previously authenticated, the following environment
2018       variables can be used instead: PCA_PREAUTHURL (required),
2019       PCA_PREAUTHTOKEN (required)
2020
2021       If PCA_AUTHVERSION is unspecified, it will default to version 2.
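
       A sketch of a backup run (credentials and container are placeholders):

              PCA_USERNAME=<user> PCA_PASSWORD=<password>
              PCA_AUTHURL=<auth_url> duplicity /some/path pca://container_name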
2022

A NOTE ON PYDRIVE BACKEND

       The pydrive backend requires the Python PyDrive package to be
       installed on the system. See REQUIREMENTS.
2026
2027       There are two ways to use PyDrive: with a regular account or with a
2028       "service account". With a service account, a separate account is
2029       created, that is only accessible with Google APIs and not a web login.
2030       With a regular account, you can store backups in your normal Google
2031       Drive.
2032
2033       To use a service account, go to the Google developers console at
2034       https://console.developers.google.com. Create a project, and make sure
2035       Drive API is enabled for the project. Under "APIs and auth", click
2036       Create New Client ID, then select Service Account with P12 key.
2037
2038       Download the .p12 key file of the account and convert it to the .pem
2039       format:
2040       openssl pkcs12 -in XXX.p12  -nodes -nocerts > pydriveprivatekey.pem
2041
2042       The content of .pem file should be passed to GOOGLE_DRIVE_ACCOUNT_KEY
2043       environment variable for authentication.
2044
       The email address of the account will be used as part of the URL. See
       URL FORMAT above.
2047
2048       The alternative is to use a regular account. To do this, start as
2049       above, but when creating a new Client ID, select "Installed
2050       application" of type "Other". Create a file with the following content,
2051       and pass its filename in the GOOGLE_DRIVE_SETTINGS environment
2052       variable:
2053              client_config_backend: settings
2054              client_config:
2055                  client_id: <Client ID from developers' console>
2056                  client_secret: <Client secret from developers' console>
2057              save_credentials: True
2058              save_credentials_backend: file
2059              save_credentials_file: <filename to cache credentials>
2060              get_refresh_token: True
2061
       In this scenario, the username and host parts of the URL play no role;
       only the path matters. During the first run, you will be prompted to
       visit a URL in your browser to grant access to your drive. Once
       granted, you will receive a verification code to paste back into
       Duplicity. The credentials are then cached in the file referenced
       above for future use.
2068

A NOTE ON RCLONE BACKEND

2070       Rclone is a powerful command line program to sync files and directories
2071       to and from various cloud storage providers.
2072
2073   Usage
2074       Once you have configured an rclone remote via
2075
2076              rclone config
2077
2078       and successfully set up a remote (e.g. gdrive for Google Drive),
2079       assuming you can list your remote files with
2080
2081              rclone ls gdrive:mydocuments
2082
2083       you can start your backup with
2084
2085              duplicity /mydocuments rclone://gdrive:/mydocuments
2086
       Please note the slash after the second colon. Some storage providers
       will work with or without a slash after the colon, but others will
       not. Since duplicity will complain about a malformed URL if a slash is
       not present, always put it after the colon, and the backend will
       handle it for you.
2092
2093   Options
2094       Note that all rclone options can be set by env vars as well. This is
2095       properly documented here
2096
2097              https://rclone.org/docs/
2098
       but in a nutshell you need to take the long option name, strip the
       leading --, change - to _, make it upper case and prepend RCLONE_.
       For example:
2102
2103              the equivalent of '--stats 5s' would be the env var
2104              RCLONE_STATS=5s
2105

A NOTE ON SLATE BACKEND

2107       Three environment variables are used with the slate backend:
2108         1.  `SLATE_API_KEY` - Your slate API key
2109         2.  `SLATE_SSL_VERIFY` - either '1'(True) or '0'(False) for ssl
2110       verification (optional - True by default)
         3.  `PASSPHRASE` - your gpg passphrase for encryption (optional -
       will be prompted if not set, or not used at all if using the `--no-
       encryption` parameter)
2114
2115       To use the slate backend, use the following scheme:
2116              slate://[slate-id]
2117
2118       e.g. Full backup of current directory to slate:
2119              duplicity full . "slate://6920df43-5c3w-2x7i-69aw-2390567uav75"
2120
2121       Here's a demo:
2122       https://gitlab.com/Shr1ftyy/duplicity/uploads/675664ef0eb431d14c8e20045e3fafb6/slate_demo.mp4
2123

A NOTE ON SSH BACKENDS

2125       The ssh backends support sftp and scp/ssh transport protocols.  This is
2126       a known user-confusing issue as these are fundamentally different.  If
2127       you plan to access your backend via one of those please inform yourself
2128       about the requirements for a server to support sftp or scp/ssh access.
2129       To make it even more confusing the user can choose between several ssh
2130       backends via a scheme prefix: paramiko+ (default), pexpect+, lftp+... .
2131       paramiko & pexpect support --use-scp, --ssh-askpass and --ssh-options.
2132       Only the pexpect backend allows one to define --scp-command and --sftp-
2133       command.
2134       SSH paramiko backend (default) is a complete reimplementation of ssh
2135       protocols natively in python. Advantages are speed and maintainability.
2136       Minor disadvantage is that extra packages are needed as listed in
       REQUIREMENTS.  In sftp (default) mode all operations are done via the
       corresponding sftp commands. In scp mode ( --use-scp ) scp access is
       used for put/get operations, but listing is done via the ssh remote
       shell.
2140       SSH pexpect backend is the legacy ssh backend using the command line
2141       ssh binaries via pexpect.  Older versions used scp for get and put
2142       operations and sftp for list and delete operations.  The current
2143       version uses sftp for all four supported operations, unless the --use-
2144       scp option is used to revert to old behavior.
2145       SSH lftp backend is simply there because lftp can interact with the ssh
2146       cmd line binaries.  It is meant as a last resort in case the above
2147       options fail for some reason.
2148
2149   Why use sftp instead of scp?
       The change to sftp was made in order to allow the remote system to
       chroot the backup, thus providing better security, and because sftp
       does not suffer from shell quoting issues like scp.  Scp also does not
2153       support any kind of file listing, so sftp or ssh access will always be
2154       needed in addition for this backend mode to work properly. Sftp does
2155       not have these limitations but needs an sftp service running on the
2156       backend server, which is sometimes not an option.
2157

A NOTE ON SSL CERTIFICATE VERIFICATION

       Certificate verification, as implemented right now [02.2016], exists
       only in the webdav and lftp backends.  Older pythons (2.7.8 and
       older) and older lftp binaries need a file based database of
       certification authority certificates (cacert file).
       Newer pythons (2.7.9+) and recent lftp versions however support the
       system default certificates (usually in /etc/ssl/certs) and also
       allow giving an alternative ca cert folder via --ssl-cacert-path.
2166       The cacert file has to be a PEM formatted text file as currently
2167       provided by the CURL project. See
2168              http://curl.haxx.se/docs/caextract.html
2169       After creating/retrieving a valid cacert file you should copy it to
2170       either
2171              ~/.duplicity/cacert.pem
2172              ~/duplicity_cacert.pem
2173              /etc/duplicity/cacert.pem
2174       Duplicity searches it there in the same order and will fail if it can't
2175       find it.  You can however specify the option --ssl-cacert-file <file>
2176       to point duplicity to a copy in a different location.
2177       Finally there is the --ssl-no-check-certificate option to disable
2178       certificate verification altogether, in case some ssl library is
2179       missing or verification is not wanted. Use it with care, as even with
2180       self signed servers manually providing the private ca certificate is
2181       definitely the safer option.
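
       A sketch pointing duplicity at a cacert copy in a non-default location
       (paths and target are placeholders):

              duplicity --ssl-cacert-file ~/certs/cacert.pem /some/path
              webdavs://user@other.host/some_dir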
2182

A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS

       Swift is the OpenStack Object Storage service.
       The backend requires python-swiftclient to be installed on the system.
       python-keystoneclient is also needed to use OpenStack's Keystone
       Identity service.  See REQUIREMENTS.
2188
       It uses the following environment variables for authentication:
2190
2191              SWIFT_USERNAME (required),
2192              SWIFT_PASSWORD (required),
2193              SWIFT_AUTHURL (required),
2194              SWIFT_TENANTID or SWIFT_TENANTNAME (required with
2195              SWIFT_AUTHVERSION=2, can alternatively be defined in
2196              SWIFT_USERNAME like e.g. SWIFT_USERNAME="tenantname:user"),
2197              SWIFT_PROJECT_ID or SWIFT_PROJECT_NAME (required with
2198              SWIFT_AUTHVERSION=3),
2199              SWIFT_USERID (optional, required only for IBM Bluemix
2200              ObjectStorage),
2201              SWIFT_REGIONNAME (optional).
2202
2203       If the user was previously authenticated, the following environment
2204       variables can be used instead: SWIFT_PREAUTHURL (required),
2205       SWIFT_PREAUTHTOKEN (required)
2206
2207       If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
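
       A sketch using v1 auth (credentials and container are placeholders):

              SWIFT_USERNAME=<user> SWIFT_PASSWORD=<password>
              SWIFT_AUTHURL=<auth_url> duplicity /some/path
              swift://container_name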
2208

A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING

       Signing and symmetrically encrypting at the same time with the gpg
       binary on the command line, as used within duplicity, is a
       specifically challenging issue.  Tests showed that the following
       combinations proved to work:
       1. Set up gpg-agent properly. Use the option --use-agent and enter
       both passphrases (symmetric and sign key) in the gpg-agent's dialog.
       2. Use a PASSPHRASE of your choice for symmetric encryption while the
       signing key has an empty passphrase.
2218       3. The used PASSPHRASE for symmetric encryption and the passphrase of
2219       the signing key are identical.
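
       As a sketch of case 3 (key id, paths and target are placeholders),
       assuming duplicity reuses PASSPHRASE for the signing key when
       SIGN_PASSPHRASE is not set:

              PASSPHRASE=<passphrase> duplicity --sign-key <keyid>
              /some/path scp://user@host/backup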
2220

A NOTE ON THE XORRISO BACKEND

2222       This backend uses the xorriso tool to append backups to optical media
2223       or ISO9660 images.
2224
2225       Use the following environment variables for more settings:
2226              XORRISO_PATH, set an alternative path to the xorriso executable
2227              XORRISO_WRITE_SPEED, specify the speed for writing to the
2228              optical disc. One of [min, max]
2229              XORRISO_ASSERT_VOLID, specify the required volume ID of the ISO.
2230              Aborts when the actual volume ID is different.
2231              XORRISO_ARGS, for expert use only. Pass arbitrary arguments to
2232              xorriso. Example: XORRISO_ARGS='-md5 all'
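
       A sketch writing a backup to an optical drive (the device path is a
       placeholder):

              XORRISO_WRITE_SPEED=min duplicity /some/path
              xorriso:///dev/byOpticalDrive:/backups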
2233

KNOWN ISSUES / BUGS

       Hard links are currently unsupported (they will be treated as
       non-linked regular files).
2237
       Bad signatures will be treated as empty instead of logging an
       appropriate error message.
2240

OPERATION AND DATA FORMATS

       This section describes duplicity's basic operation and the format of
       its data files.  It should not be necessary to read this section to
       use duplicity.
2245
2246       The files used by duplicity to store backup data are tarfiles in GNU
2247       tar format.  They can be produced independently by rdiffdir(1).  For
2248       incremental backups, new files are saved normally in the tarfile.  But
2249       when a file changes, instead of storing a complete copy of the file,
2250       only a diff is stored, as generated by rdiff(1).  If a file is deleted,
2251       a 0 length file is stored in the tar.  It is possible to restore a
2252       duplicity archive "manually" by using tar and then cp, rdiff, and rm as
2253       necessary.  These duplicity archives have the extension difftar.
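
       A minimal sketch of such a manual restore, assuming an unencrypted,
       uncompressed volume and a changed file stored as an rdiff delta (all
       file names are illustrative):

              tar xf duplicity-full.<time>.vol1.difftar
              rdiff patch <previous-version-of-file> <extracted-diff>
              <restored-file>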
2254
2255       Both full and incremental backup sets have the same format.  In effect,
2256       a full backup set is an incremental one generated from an empty
2257       signature (see below).  The files in full backup sets will start with
2258       duplicity-full while the incremental sets start with duplicity-inc.
2259       When restoring, duplicity applies patches in order, so deleting, for
2260       instance, a full backup set may make related incremental backup sets
2261       unusable.
2262
2263       In order to determine which files have been deleted, and to calculate
2264       diffs for changed files, duplicity needs to process information about
2265       previous sessions.  It stores this information in the form of tarfiles
2266       where each entry's data contains the signature (as produced by rdiff)
2267       of the file instead of the file's contents.  These signature sets have
2268       the extension sigtar.
2269
       Signature files are not required to restore a backup set, but without
       an up-to-date signature, duplicity cannot append an incremental backup
       to an existing archive.

       To save bandwidth, duplicity generates full signature sets and
       incremental signature sets.  A full signature set is generated for
       each full backup, and an incremental one for each incremental backup.
       These start with duplicity-full-signatures and
       duplicity-new-signatures respectively.  These signatures will be
       stored both locally and remotely.  The remote signatures will be
       encrypted if encryption is enabled.  The local signatures are not
       encrypted and are stored in the archive dir (see --archive-dir ).

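       Continuing the example above, the corresponding remote signature sets
       would be named like these (timestamps hypothetical):

              duplicity-full-signatures.20230101T120000Z.sigtar.gpg
              duplicity-new-signatures.20230101T120000Z.to.20230201T120000Z.sigtar.gpg
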

REQUIREMENTS

       Duplicity requires a POSIX-like operating system with a python
       interpreter version 3.8+ installed.  It is best used under GNU/Linux.

       Some backends also require additional components (probably available
       as packages for your specific platform):
       Amazon Drive backend
              python-requests - http://python-requests.org
              python-requests-oauthlib - https://github.com/requests/requests-
              oauthlib
       azure backend (Azure Storage Blob Service)
              Microsoft Azure Storage Blobs client library for Python -
              https://pypi.org/project/azure-storage-blob/
       boto backend (S3 Amazon Web Services, Google Cloud Storage) (legacy)
              boto version 2.49 (2018/07/11) - http://github.com/boto/boto
       boto3 backend (S3 Amazon Web Services, Google Cloud Storage) (default)
              boto3 version 1.x - https://github.com/boto/boto3
       box backend (box.com)
              boxsdk - https://github.com/box/box-python-sdk
       cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
              Rackspace CloudFiles Pyrax API -
              http://docs.rackspace.com/sdks/guide/content/python.html
       dpbx backend (Dropbox)
              Dropbox Python SDK -
              https://www.dropbox.com/developers/reference/sdk
       gdocs gdata backend (legacy)
              Google Data APIs Python Client Library -
              http://code.google.com/p/gdata-python-client/
       gdocs pydrive backend (default)
              see pydrive backend
       gio backend (Gnome VFS API)
              PyGObject - http://live.gnome.org/PyGObject
              D-Bus (dbus) - http://www.freedesktop.org/wiki/Software/dbus
       lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
       sftp, webdav[s])
              LFTP Client - http://lftp.yar.ru/
       MEGA backend (only works for accounts created prior to November 2018)
       (mega.nz)
              megatools client - https://github.com/megous/megatools
       MEGA v2 and v3 backend (works for all MEGA accounts) (mega.nz)
              MEGAcmd client - https://mega.nz/cmd
       multi backend
              Multi -- store to more than one backend
              (also see A NOTE ON MULTI BACKEND below).
       ncftp backend (ftp, select via ncftp+ftp://)
              NcFTP - http://www.ncftp.com/
       OneDrive backend (Microsoft OneDrive)
              python-requests-oauthlib - https://github.com/requests/requests-
              oauthlib
       Par2 Wrapper Backend
              par2cmdline - http://parchive.sourceforge.net/
       pydrive backend
              PyDrive -- a wrapper library of google-api-python-client -
              https://pypi.python.org/pypi/PyDrive
              (also see A NOTE ON PYDRIVE BACKEND below).
       rclone backend
              rclone - https://rclone.org/
       rsync backend
              rsync client binary - http://rsync.samba.org/
       ssh paramiko backend (default)
              paramiko (SSH2 for python) -
              http://pypi.python.org/pypi/paramiko (downloads);
              http://github.com/paramiko/paramiko (project page)
              pycrypto (Python Cryptography Toolkit) -
              http://www.dlitz.net/software/pycrypto/
       ssh pexpect backend (legacy)
              sftp/scp client binaries OpenSSH - http://www.openssh.com/
              Python pexpect module -
              http://pexpect.sourceforge.net/pexpect.html
       swift backend (OpenStack Object Storage)
              Python swiftclient module - https://github.com/openstack/python-
              swiftclient/
              Python keystoneclient module -
              https://github.com/openstack/python-keystoneclient/
       webdav backend
              certificate authority database file for ssl certificate
              verification of HTTPS connections -
              http://curl.haxx.se/docs/caextract.html
              (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
              Python kerberos module for kerberos authentication -
              https://github.com/02strich/pykerberos
       MediaFire backend
              MediaFire Python Open SDK -
              https://pypi.python.org/pypi/mediafire/
       xorriso backend
              xorriso - https://www.gnu.org/software/xorriso/

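       Most of the python modules above can be installed with pip.  For
       example, the dependencies of the default ssh and boto3 backends could
       be installed like this (PyPI package names assumed current):

              pip install paramiko boto3
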

AUTHOR

       Original Author - Ben Escoto <bescoto@stanford.edu>
       Current Maintainer - Kenneth Loafman <kenneth@loafman.com>
       Continuous Contributors
              Edgar Soldin, Mike Terry
       Most backends were contributed individually.  Information about their
       authorship may be found in the corresponding file's header.
       We would also like to thank everybody posting issues to the mailing
       list or on launchpad, sending in patches, or otherwise contributing.
       Duplicity wouldn't be as stable and useful if it weren't for you.
       A special thanks goes to rsync.net, a Cloud Storage provider with
       explicit support for duplicity, for several monetary donations and for
       providing a special "duplicity friends" rate for their offsite backup
       service.  Email info@rsync.net for details.


SEE ALSO

       rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).



Version 1.2.3                    May 09, 2023                     DUPLICITY(1)