DUPLICITY(1)                     User Manuals                    DUPLICITY(1)


NAME
duplicity - Encrypted incremental backup to local or remote storage.

SYNOPSIS
For detailed descriptions of each command see chapter ACTIONS.

duplicity [full|incremental] [options] source_directory target_url

duplicity verify [options] [--compare-data] [--file-to-restore
<relpath>] [--time time] source_url target_directory

duplicity collection-status [options] [--file-changed <relpath>]
target_url

duplicity list-current-files [options] [--time time] target_url

duplicity [restore] [options] [--file-to-restore <relpath>] [--time
time] source_url target_directory

duplicity remove-older-than <time> [options] [--force] target_url

duplicity remove-all-but-n-full <count> [options] [--force] target_url

duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
target_url

duplicity cleanup [options] [--force] target_url

duplicity replicate [options] [--time time] source_url target_url

DESCRIPTION
Duplicity incrementally backs up files and folders into tar-format
volumes encrypted with GnuPG and places them on a remote (or local)
storage backend. See chapter URL FORMAT for a list of all supported
backends and how to address them. Because duplicity uses librsync,
incremental backups are space efficient and only record the parts of
files that have changed since the last backup. Currently duplicity
supports deleted files, full Unix permissions, uid/gid, directories,
symbolic links, fifos, etc., but not hard links.

If you are backing up the root directory /, remember to --exclude
/proc, or else duplicity will probably crash on the weird stuff in
there.

EXAMPLES
Here is an example of a backup, using sftp to back up /home/me to
some_dir on the other.host machine:

duplicity /home/me sftp://uid@other.host/some_dir

If the above is run repeatedly, the first will be a full backup, and
subsequent ones will be incremental. To force a full backup, use the
full action:

duplicity full /home/me sftp://uid@other.host/some_dir

or enforce a periodic full backup via --full-if-older-than <time>,
e.g. a full backup every month:

duplicity --full-if-older-than 1M /home/me
sftp://uid@other.host/some_dir

Now suppose we accidentally delete /home/me and want to restore it the
way it was at the time of the last backup:

duplicity sftp://uid@other.host/some_dir /home/me

Duplicity enters restore mode because the URL comes before the local
directory. If we wanted to restore just the file "Mail/article" in
/home/me as it was three days ago into /home/me/restored_file:

duplicity -t 3D --file-to-restore Mail/article
sftp://uid@other.host/some_dir /home/me/restored_file

The following command compares the latest backup with the current
files:

duplicity verify sftp://uid@other.host/some_dir /home/me

Finally, duplicity recognizes several include/exclude options. For
instance, the following will back up the root directory, but exclude
/mnt, /tmp, and /proc:

duplicity --exclude /mnt --exclude /tmp --exclude /proc /
file:///usr/local/backup

Note that in this case the destination is the local directory
/usr/local/backup. The following will back up only the /home and /etc
directories under root:

duplicity --include /home --include /etc --exclude '**' /
file:///usr/local/backup

Duplicity can also access a repository via ftp. If a user name is
given, the environment variable FTP_PASSWORD is read to determine the
password:

FTP_PASSWORD=mypassword duplicity /local/dir
ftp://user@other.host/some_dir

ACTIONS
Duplicity knows action commands, which can be fine-tuned with options.
The action names for backup (full, incr) and restoration (restore) can
be omitted, because duplicity detects the mode from the order of the
target URL and the local folder: if the target URL comes before the
local folder, a restore is performed; if the local folder comes before
the target URL, that folder is backed up to the target URL.
If a backup is requested and old signatures can be found, duplicity
automatically performs an incremental backup.

Note: The following descriptions cover some but not all options that
can be used with each action command. Consult the OPTIONS section for
more detailed information.

full <folder> <url>
Perform a full backup. A new backup chain is started even if
signatures are available for an incremental backup.

incr <folder> <url>
Perform an incremental backup. Duplicity will abort if no old
signatures can be found.

verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
<url> <local_path>
Verify tests the integrity of the backup archives at the remote
location by downloading each file and checking both that it can
restore the archive and that the restored file matches the
signature of that file stored in the backup, i.e. compares the
archived file with its hash value from archival time. Verify
does not actually restore and will not overwrite any local
files. Duplicity will exit with a non-zero error level if any
files do not match the signature stored in the archive for that
file. On verbosity level 4 or higher, it will log a message for
each file that differs from the stored signature. Files must be
downloaded to the local machine in order to compare them.
Verify does not compare the backed-up version of the file to the
current local copy of the files unless the --compare-data option
is used (see below).
The --file-to-restore option restricts verify to that file or
folder. The --time option allows selecting a backup to verify.
The --compare-data option enables data comparison (see below).

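For instance, to verify the latest backup and additionally compare the
archived data against the current local files (reusing the
hypothetical host from the examples above):

duplicity verify --compare-data sftp://uid@other.host/some_dir /home/me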

collection-status [--file-changed <relpath>] <url>
Summarize the status of the backup repository by printing the
chains and sets found, and the number of volumes in each.

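A sketch, again using the hypothetical URL from the examples above:

duplicity collection-status sftp://uid@other.host/some_dir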

list-current-files [--time <time>] <url>
Lists the files contained in the most current backup or backup
at time. The information will be extracted from the signature
files, not the archive data itself. Thus the whole archive does
not have to be downloaded, but on the other hand if the archive
has been deleted or corrupted, this command will not detect it.

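For example, to list the files as they existed in the backup from
three days ago (hypothetical URL):

duplicity list-current-files --time 3D sftp://uid@other.host/some_dir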

restore [--file-to-restore <relpath>] [--time <time>] <url>
<target_folder>
You can restore the full monty or selected folders/files from a
specific time. Use the relative path as it is printed by list-
current-files. Usually not needed as duplicity enters restore
mode when it detects that the URL comes before the local folder.

remove-older-than <time> [--force] <url>
Delete all backup sets older than the given time. Old backup
sets will not be deleted if backup sets newer than time depend
on them. See the TIME FORMATS section for more information.
Note, this action cannot be combined with backup or other
actions, such as cleanup. Note also that --force will be needed
to delete the files instead of just listing them.

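For example, to actually delete all backup sets older than six months
(hypothetical URL):

duplicity remove-older-than 6M --force sftp://uid@other.host/some_dir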

remove-all-but-n-full <count> [--force] <url>
Delete all backup sets that are older than the count:th last
full backup (in other words, keep the last count full backups
and associated incremental sets). count must be larger than
zero. A value of 1 means that only the single most recent backup
chain will be kept. Note that --force will be needed to delete
the files instead of just listing them.

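For instance, to keep only the two most recent backup chains and
delete everything older (hypothetical URL):

duplicity remove-all-but-n-full 2 --force sftp://uid@other.host/some_dir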

remove-all-inc-of-but-n-full <count> [--force] <url>
Delete incremental sets of all backup sets that are older than
the count:th last full backup (in other words, keep only old
full backups and not their increments). count must be larger
than zero. A value of 1 means that only the single most recent
backup chain will be kept intact. Note that --force will be
needed to delete the files instead of just listing them.

cleanup [--force] <url>
Delete the extraneous duplicity files on the given backend.
Non-duplicity files, or files in complete data sets will not be
deleted. This should only be necessary after a duplicity
session fails or is aborted prematurely. Note that --force will
be needed to delete the files instead of just listing them.

replicate [--time time] <source_url> <target_url>
Replicate backup sets from source to target backend. Files will
be (re)-encrypted and (re)-compressed depending on normal
backend options. Signatures and volumes will not get recomputed,
thus options like --volsize or --max-blocksize have no effect.
When --time time is given, only backup sets older than time will
be replicated.

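A sketch that copies an existing repository to a second, local
destination (both URLs hypothetical):

duplicity replicate sftp://uid@other.host/some_dir
file:///mnt/backup_copy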

OPTIONS
--allow-source-mismatch
Do not abort on attempts to use the same archive dir or remote
backend to back up different directories. duplicity will tell
you if you need this switch.

--archive-dir path
The archive directory. NOTE: This option changed in 0.6.0. The
archive directory is now necessary in order to manage
persistence for current and future enhancements. As such, this
option is now used only to change the location of the archive
directory. The archive directory should not be deleted, or
duplicity will have to recreate it from the remote repository
(which may require decrypting the backup contents).

When backing up or restoring, this option specifies that the
local archive directory is to be created in path. If the
archive directory is not specified, the default will be to
create the archive directory in ~/.cache/duplicity/.

The archive directory can be shared between backups to multiple
targets, because a subdirectory of the archive dir is used for
individual backups (see --name ).

The combination of archive directory and backup name must be
unique in order to separate the data of different backups.

The interaction between the --archive-dir and the --name options
allows for four possible combinations for the location of the
archive dir:

1. neither specified (default)
~/.cache/duplicity/hash-of-url

2. --archive-dir=/arch, no --name
/arch/hash-of-url

3. no --archive-dir, --name=foo
~/.cache/duplicity/foo

4. --archive-dir=/arch, --name=foo
/arch/foo

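For example, a backup run corresponding to combination 4 above might
look like this (path, name and URL are hypothetical):

duplicity --archive-dir /arch --name foo /home/me
sftp://uid@other.host/some_dir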

--asynchronous-upload
(EXPERIMENTAL) Perform file uploads asynchronously in the
background, with respect to volume creation. This means that
duplicity can upload a volume while, at the same time, preparing
the next volume for upload. The intended end-result is a faster
backup, because the local CPU and your bandwidth can be more
consistently utilized. Use of this option implies additional
need for disk space in the temporary storage location; rather
than needing to store only one volume at a time, enough storage
space is required to store two volumes.

--backend-retry-delay number
Specifies the number of seconds that duplicity waits after an
error has occurred before attempting to repeat the operation.

--cf-backend backend
Allows the explicit selection of a cloudfiles backend. Defaults
to pyrax. Alternatively you might choose cloudfiles.

--b2-hide-files
Causes Duplicity to hide files in B2 instead of deleting them.
Useful in combination with B2's lifecycle rules.

--compare-data
Enable data comparison of regular files on action verify. This
conducts a verify as described above to verify the integrity of
the backup archives, but additionally compares restored files to
those in target_directory. Duplicity will not replace any files
in target_directory. Duplicity will exit with a non-zero error
level if the files do not correctly verify or if any files from
the archive differ from those in target_directory. On verbosity
level 4 or higher, it will log a message for each file that
differs from its equivalent in target_directory.

--copy-links
Resolve symlinks during backup. Enabling this will resolve &
back up the symlink's file/folder data instead of the symlink
itself, potentially increasing the size of the backup.

--dry-run
Calculate what would be done, but do not perform any backend
actions.

--encrypt-key key-id
When backing up, encrypt to the given public key, instead of
using symmetric (traditional) encryption. Can be specified
multiple times. The key-id can be given in any of the formats
supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
ID" for details.

--encrypt-secret-keyring filename
This option can only be used with --encrypt-key, and changes the
path to the secret keyring for the encrypt key to filename.
This keyring is not used when creating a backup. If not
specified, the default secret keyring is used, which is usually
located at .gnupg/secring.gpg

--encrypt-sign-key key-id
Convenience parameter. Same as --encrypt-key key-id --sign-key
key-id.

--exclude shell_pattern
Exclude the file or files matched by shell_pattern. If a
directory is matched, then files under that directory will also
be matched. See the FILE SELECTION section for more
information.

--exclude-device-files
Exclude all device files. This can be useful for
security/permissions reasons or if duplicity is not handling
device files correctly.

--exclude-filelist filename
Excludes the files listed in filename, with each line of the
filelist interpreted according to the same rules as --include
and --exclude. See the FILE SELECTION section for more
information.

--exclude-if-present filename
Exclude directories if filename is present. Allows the user to
specify folders that they do not wish to back up by adding a
specified file (e.g. ".nobackup") instead of maintaining a
comprehensive exclude/include list.

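For example, to skip every directory that contains a ".nobackup"
marker file (hypothetical URL):

duplicity --exclude-if-present .nobackup /home/me
sftp://uid@other.host/some_dir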

--exclude-older-than time
Exclude any files whose modification date is earlier than the
specified time. This can be used to produce a partial backup
that contains only recently changed files. See the TIME FORMATS
section for more information.

--exclude-other-filesystems
Exclude files on file systems (identified by device number)
other than the file system the root of the source directory is
on.

--exclude-regexp regexp
Exclude files matching the given regexp. Unlike the --exclude
option, this option does not match files in a directory it
matches. See the FILE SELECTION section for more information.

--file-prefix, --file-prefix-manifest, --file-prefix-archive, --file-
prefix-signature
Adds a prefix to all files, manifest files, archive files,
and/or signature files.

The same set of prefixes must be passed in on backup and
restore.

If both global and type-specific prefixes are set, the global
prefix will go before type-specific prefixes.

See also A NOTE ON FILENAME PREFIXES

--file-to-restore path
This option may be given in restore mode, causing only path to
be restored instead of the entire contents of the backup
archive. path should be given relative to the root of the
directory backed up.

--full-if-older-than time
Perform a full backup if an incremental backup is requested, but
the latest full backup in the collection is older than the given
time. See the TIME FORMATS section for more information.

--force
Proceed even if data loss might result. Duplicity will let the
user know when this option is required.

--ftp-passive
Use passive (PASV) data connections. The default is to use
passive, but to fall back to regular if the passive connection
fails or times out.

--ftp-regular
Use regular (PORT) data connections.

--gio Use the GIO backend and interpret any URLs as GIO would.

--hidden-encrypt-key key-id
Same as --encrypt-key, but it hides the user's key id from the
encrypted file. It uses gpg's --hidden-recipient option to
obfuscate the owner of the backup. On restore, gpg will
automatically try all available secret keys in order to decrypt
the backup. See gpg(1) for more details.

--ignore-errors
Try to ignore certain errors if they happen. This option is only
intended to allow the restoration of a backup in the face of
certain problems that would otherwise cause the backup to fail.
It is not ever recommended to use this option unless you have a
situation where you are trying to restore from backup and it is
failing because of an issue which you want duplicity to ignore.
Even then, depending on the issue, this option may not have an
effect.

Please note that while ignored errors will be logged, there will
be no summary at the end of the operation to tell you what was
ignored, if anything. If this is used for emergency restoration
of data, it is recommended that you run the backup in such a way
that you can revisit the backup log (look for lines containing
the string IGNORED_ERROR).

If you ever have to use this option for reasons that are not
understood or understood but not your own responsibility, please
contact duplicity maintainers. The need to use this option under
production circumstances would normally be considered a bug.

--imap-full-address email_address
The full email address of the user name when logging into an
imap server. If not supplied just the user name part of the
email address is used.

--imap-mailbox option
Allows you to specify a different mailbox. The default is
"INBOX". Other languages may require a different mailbox than
the default.

--gpg-binary file_path
Allows you to force duplicity to use file_path as the gpg
command line binary. Can be an absolute or relative file path
or a file name. Default value is 'gpg'. The binary will be
located via the PATH environment variable.

--gpg-options options
Allows you to pass options to gpg encryption. The options list
should be of the form "--opt1 --opt2=parm" where the string is
quoted and the only spaces allowed are between options.

--include shell_pattern
Similar to --exclude but include matched files instead. Unlike
--exclude, this option will also match parent directories of
matched files (although not necessarily their contents). See
the FILE SELECTION section for more information.

--include-filelist filename
Like --exclude-filelist, but include the listed files instead.
See the FILE SELECTION section for more information.

--include-regexp regexp
Include files matching the regular expression regexp. Only
files explicitly matched by regexp will be included by this
option. See the FILE SELECTION section for more information.

--log-fd number
Write specially-formatted versions of output messages to the
specified file descriptor. The format used is designed to be
easily consumable by other programs.

--log-file filename
Write specially-formatted versions of output messages to the
specified file. The format used is designed to be easily
consumable by other programs.

--max-blocksize number
Determines the number of the blocks examined for changes during
the diff process. For files < 1MB the blocksize is a constant
of 512. For files over 1MB the size is given by:

file_blocksize = int((file_len / (2000 * 512)) * 512)
return min(file_blocksize, config.max_blocksize)

where config.max_blocksize defaults to 2048. If you specify a
larger max_blocksize, your difftar files will be larger, but
your sigtar files will be smaller. If you specify a smaller
max_blocksize, the reverse occurs. The --max-blocksize option
should be in multiples of 512.

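Working the formula above for a hypothetical 10MB file:
file_blocksize = int((10485760 / (2000 * 512)) * 512) = 5242, which
the default cap then reduces to min(5242, 2048) = 2048. Raising the
cap, e.g.

duplicity --max-blocksize 4096 /home/me sftp://uid@other.host/some_dir

lets such files be diffed with larger (here 4096-byte) blocks, trading
smaller sigtar files for larger difftar files.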

--name symbolicname
Set the symbolic name of the backup being operated on. The
intent is to use a separate name for each logically distinct
backup. For example, someone may use "home_daily_s3" for the
daily backup of a home directory to Amazon S3. The structure of
the name is up to the user, it is only important that the names
be distinct. The symbolic name is currently only used to affect
the expansion of --archive-dir , but may be used for additional
features in the future. Users running more than one distinct
backup are encouraged to use this option.

If not specified, the default value is a hash of the backend
URL.

--no-compression
Do not use GZip to compress files on remote system.

--no-encryption
Do not use GnuPG to encrypt files on remote system.

--no-print-statistics
By default duplicity will print statistics about the current
session after a successful backup. This switch disables that
behavior.

--null-separator
Use nulls (\0) instead of newlines (\n) as line separators,
which may help when dealing with filenames containing newlines.
This affects the expected format of the files specified by the
--{include|exclude}-filelist switches as well as the format of
the directory statistics file.

--numeric-owner
On restore always use the numeric uid/gid from the archive and
not the archived user/group names, which is the default
behaviour. Recommended for restoring from live CDs which might
have users with identical names but different uids/gids.

--do-not-restore-ownership
Ignores the uid/gid from the archive and keeps the current
user's ownership. Recommended for restoring data to a mounted
filesystem which does not support Unix ownership or when root
privileges are not available.

--num-retries number
Number of retries to make on errors before giving up.

--old-filenames
Use the old filename format (incompatible with Windows/Samba)
rather than the new filename format.

--par2-options options
Verbatim options to pass to par2.

--par2-redundancy percent
Adjust the level of redundancy in percent for Par2 recovery
files (default 10%).

--progress
When selected, duplicity will output the current upload progress
and estimated upload time. To annotate changes, it will perform
a first dry-run before a full or incremental backup, and then
run the real operation, estimating the real upload progress.

--progress-rate number
Sets the update rate at which duplicity will output the upload
progress messages (requires --progress option). Default is to
print the status every 3 seconds.

--rename <original path> <new path>
Treats the path orig in the backup as if it were the path new.
Can be passed multiple times. An example:

duplicity restore --rename Documents/metal Music/metal
sftp://uid@other.host/some_dir /home/me

--rsync-options options
Allows you to pass options to the rsync backend. The options
list should be of the form "opt1=parm1 opt2=parm2" where the
option string is quoted and the only spaces allowed are between
options. The option string will be passed verbatim to rsync,
after any internally generated option designating the remote
port to use. Here is a possibly useful example:

duplicity --rsync-options="--partial-dir=.rsync-partial"
/home/me rsync://uid@other.host/some_dir

--s3-european-buckets
When using the Amazon S3 backend, create buckets in Europe
instead of the default (requires --s3-use-new-style ). Also see
the EUROPEAN S3 BUCKETS section.

This option does not apply when using the newer boto3 backend,
which does not create buckets.

See also A NOTE ON AMAZON S3 below.

--s3-unencrypted-connection
Don't use SSL for connections to S3.

This may be much faster, at some cost to confidentiality.

With this option, anyone who can observe traffic between your
computer and S3 will be able to tell: that you are using
Duplicity, the name of the bucket, your AWS Access Key ID, the
increment dates and the amount of data in each increment.

This option affects only the connection, not the GPG encryption
of the backup increment files. Unless that is disabled, an
observer will not be able to see the file names or contents.

This option is not available when using the newer boto3 backend.

See also A NOTE ON AMAZON S3 below.

--s3-use-new-style
When operating on Amazon S3 buckets, use new-style subdomain
bucket addressing. This is now the preferred method to access
Amazon S3, but is not backwards compatible if your bucket name
contains upper-case characters or other characters that are not
valid in a hostname.

This option has no effect when using the newer boto3 backend,
which will always use new style subdomain bucket naming.

See also A NOTE ON AMAZON S3 below.

--s3-use-rrs
Store volumes using Reduced Redundancy Storage when uploading to
Amazon S3. This will lower the cost of storage but also lower
the durability of stored volumes to 99.99% instead of the
99.999999999% durability offered by Standard Storage on S3.

--s3-use-ia
Store volumes using Standard - Infrequent Access when uploading
to Amazon S3. This storage class has a lower storage cost but a
higher per-request cost, and the storage cost is calculated
against a 30-day storage minimum. According to Amazon, this
storage is ideal for long-term file storage, backups, and
disaster recovery.

--s3-use-onezone-ia
Store volumes using One Zone - Infrequent Access when uploading
to Amazon S3. This storage is similar to Standard - Infrequent
Access, but only stores object data in one Availability Zone.

--s3-use-glacier
Store volumes using Glacier S3 when uploading to Amazon S3. This
storage class has a lower cost of storage but a higher per-
request cost along with delays of up to 12 hours from the time
of retrieval request. This storage cost is calculated against a
90-day storage minimum. According to Amazon this storage is
ideal for data archiving and long-term backup offering
99.999999999% durability. To restore a backup you will have to
manually migrate all data stored on AWS Glacier back to Standard
S3 and wait for AWS to complete the migration. Notice:
Duplicity will store the manifest.gpg files from full and
incremental backups on AWS S3 standard storage to allow quick
retrieval for later incremental backups; all other data is
stored in S3 Glacier.

--s3-use-deep-archive
Store volumes using Glacier Deep Archive S3 when uploading to
Amazon S3. This storage class has a lower cost of storage but a
higher per-request cost along with delays of up to 48 hours from
the time of retrieval request. This storage cost is calculated
against a 180-day storage minimum. According to Amazon this
storage is ideal for data archiving and long-term backup
offering 99.999999999% durability. To restore a backup you will
have to manually migrate all data stored on AWS Glacier Deep
Archive back to Standard S3 and wait for AWS to complete the
migration. Notice: Duplicity will store the manifest.gpg files
from full and incremental backups on AWS S3 standard storage to
allow quick retrieval for later incremental backups; all other
data is stored in S3 Glacier Deep Archive.

Glacier Deep Archive is only available when using the newer
boto3 backend.

--s3-use-multiprocessing
Allow multipart volume uploads to S3 through multiprocessing.
This option requires Python 2.6 and can be used to make uploads
to S3 more efficient. If enabled, files duplicity uploads to S3
will be split into chunks and uploaded in parallel. Useful if
you want to saturate your bandwidth or if large files are
failing during upload.

This has no effect when using the newer boto3 backend. Boto3
always attempts multiprocessing when it believes it will be more
efficient.

See also A NOTE ON AMAZON S3 below.

--s3-use-server-side-encryption
Allow use of server side encryption in S3.

--s3-multipart-chunk-size
Chunk size (in MB) used for S3 multipart uploads. Make this
smaller than --volsize to maximize the use of your bandwidth.
For example, a chunk size of 10MB with a volsize of 30MB will
result in 3 chunks per volume upload.

See also A NOTE ON AMAZON S3 below.

--s3-multipart-max-procs
Specify the maximum number of processes to spawn when performing
a multipart upload to S3. By default, this will choose the
number of processors detected on your system (e.g. 4 for a
4-core system). You can adjust this number as required to ensure
you don't overload your system while maximizing the use of your
bandwidth.

This has no effect when using the newer boto3 backend.

See also A NOTE ON AMAZON S3 below.

--s3-multipart-max-timeout
You can control the maximum time (in seconds) a multipart upload
can spend on uploading a single chunk to S3. This may be useful
if you find your system hanging on multipart uploads or if you'd
like to control the time variance when uploading to S3 to ensure
you kill connections to slow S3 endpoints.

This has no effect when using the newer boto3 backend.

See also A NOTE ON AMAZON S3 below.

--s3-region-name
Specifies the region of the S3 storage.

This is currently only used in the newer boto3 backend.

--s3-endpoint-url
Specifies the endpoint URL of the S3 storage.

This is currently only used in the newer boto3 backend.

--azure-blob-tier
Standard storage tier used for backup files (Hot|Cool|Archive).

--azure-max-single-put-size
Specify the largest supported upload size (in bytes) for which
the Azure library makes only one put call. If the content size
is known and below this value, the Azure library will perform
only one put request to upload one block.

--azure-max-block-size
Specify the block size (in bytes) used by the Azure library to
upload blobs when a blob is split into multiple blocks. The
maximum block size the service supports is 104857600 (100MiB)
and the default is 4194304 (4MiB)

--azure-max-connections
Specify the maximum number of connections used to transfer one
blob to Azure when the blob size exceeds 64MB. The default
value is 2.

--scp-command command
(only ssh pexpect backend with --use-scp enabled) The command
will be used instead of "scp" to send or receive files. To list
and delete existing files, the sftp command is used.
See also A NOTE ON SSH BACKENDS section SSH pexpect backend.

--sftp-command command
(only ssh pexpect backend) The command will be used instead of
"sftp".
See also A NOTE ON SSH BACKENDS section SSH pexpect backend.

--short-filenames
If this option is specified, the names of the files duplicity
writes will be shorter (about 30 chars) but less understandable.
This may be useful when backing up to MacOS or another OS or FS
that doesn't support long filenames.

--sign-key key-id
This option can be used when backing up, restoring or verifying.
When backing up, all backup files will be signed with keyid key.
When restoring, duplicity will signal an error if any remote
file is not signed with the given key-id. The key-id can be
given in any of the formats supported by GnuPG; see gpg(1),
section "HOW TO SPECIFY A USER ID" for details. Should be
specified only once because currently only one signing key is
supported. The last entry overrides all other entries.
See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING

--ssh-askpass
Tells the ssh backend to prompt the user for the remote system
password, if it was not defined in the target url and no
FTP_PASSWORD env var is set. This password is also used for
passphrase-protected ssh keys.

--ssh-options options
Allows you to pass options to the ssh backend. Can be specified
multiple times or as a space separated options list. The
options list should be of the form "-oOpt1='parm1'
-oOpt2='parm2'" where the option string is quoted and the only
spaces allowed are between options. The option string will be
passed verbatim to both scp and sftp, whose command line syntax
differs slightly; the options should therefore be given in the
long option format described in ssh_config(5).

example of a list:

duplicity --ssh-options="-oProtocol=2
-oIdentityFile='/my/backup/id'" /home/me
scp://user@host/some_dir

example with multiple parameters:

duplicity --ssh-options="-oProtocol=2" --ssh-
options="-oIdentityFile='/my/backup/id'" /home/me
scp://user@host/some_dir

NOTE: The ssh paramiko backend currently supports only the -i or
-oIdentityFile or -oUserKnownHostsFile or -oGlobalKnownHostsFile
settings. If needed provide more host specific options via
ssh_config file.

--ssl-cacert-file file
(only webdav & lftp backend) Provide a cacert file for ssl
certificate verification.
See also A NOTE ON SSL CERTIFICATE VERIFICATION.

--ssl-cacert-path path/to/certs/
(only webdav backend and python 2.7.9+ OR lftp+webdavs and a
recent lftp) Provide a path to a folder containing cacert files
for ssl certificate verification.
See also A NOTE ON SSL CERTIFICATE VERIFICATION.

--ssl-no-check-certificate
(only webdav & lftp backend) Disable ssl certificate
verification.
See also A NOTE ON SSL CERTIFICATE VERIFICATION.

--swift-storage-policy
Use this storage policy when operating on Swift containers.
See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.

--metadata-sync-mode mode
This option defaults to 'partial', but you can set it to 'full'.
Use 'partial' to avoid syncing metadata for backup chains that
you are not going to use. This saves time when restoring for
the first time, and lets you restore an old backup that was
encrypted with a different passphrase by supplying only the
target passphrase.
Use 'full' to sync metadata for all backup chains on the remote.

--tempdir directory
Use this existing directory for duplicity temporary files
instead of the system default, which is usually the /tmp
directory. This option supersedes any environment variable.
See also ENVIRONMENT VARIABLES.

-ttime, --time time, --restore-time time
Specify the time from which to restore or list files.

--time-separator char
Use char as the time separator in filenames instead of colon
(":").

--timeout seconds
Use seconds as the socket timeout value if duplicity begins to
timeout during network operations. The default is 30 seconds.

--use-agent
If this option is specified, then --use-agent is passed to the
GnuPG encryption process and it will try to connect to gpg-agent
before it asks for a passphrase for --encrypt-key or --sign-key
if needed.
Note: Contrary to previous versions of duplicity, this option
will also be honored by GnuPG 2 and newer versions. If GnuPG 2
is in use, duplicity passes the option --pinentry-mode=loopback
to the gpg process unless --use-agent is specified on the
duplicity command line. This has the effect that GnuPG 2 uses
the agent only if --use-agent is given, just like GnuPG 1.

--verbosity level, -vlevel
Specify output verbosity level (log level). Named levels and
corresponding values are 0 Error, 2 Warning, 4 Notice (default),
8 Info, 9 Debug (noisiest).
level may also be
a character: e, w, n, i, d
a word: error, warning, notice, info, debug

The options -v4, -vn and -vnotice are functionally equivalent,
as are the mixed/upper-case versions -vN, -vNotice and -vNOTICE.

--version
Print duplicity's version and quit.

--volsize number
Change the volume size to number MB. Default is 200MB.

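For example, to use 50MB volumes instead of the default (hypothetical
URL):

duplicity --volsize 50 /home/me sftp://uid@other.host/some_dir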

ENVIRONMENT VARIABLES
TMPDIR, TEMP, TMP
In decreasing order of importance, specifies the directory to
use for temporary files (inherited from Python's tempfile
module). Eventually the option --tempdir supersedes any of
these.

FTP_PASSWORD
Supported by most backends which are password capable. More
secure than setting it in the backend url (which might be
readable in the operating system's process listing to other
users on the same machine).

PASSPHRASE
This passphrase is passed to GnuPG. If this is not set, the user
will be prompted for the passphrase.

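For example, a non-interactive, symmetrically encrypted backup
(passphrase and URL are hypothetical):

PASSPHRASE=mysecret duplicity /home/me
sftp://uid@other.host/some_dir
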
SIGN_PASSPHRASE
The passphrase to be used for --sign-key. If omitted and the
sign key is also one of the keys to encrypt against, PASSPHRASE
will be reused instead. Otherwise, if a passphrase is needed
but not set, the user will be prompted for it.

Other environment variables may be used to configure specific
backends. See the notes for the particular backend.

URL FORMAT
Duplicity uses the URL format (as standard as possible) to define data
locations. The generic format for a URL is:

scheme://[user[:password]@]host[:port]/[/]path

It is not recommended to expose the password on the command line, since
it could be revealed to anyone with permission to do process listings;
it is permitted, however. Consider setting the environment variable
FTP_PASSWORD instead, which is used by most, if not all, backends,
regardless of its name.

In protocols that support it, the path may be preceded by a single
slash, '/path', to represent a relative path to the target home
directory, or preceded by a double slash, '//path', to represent an
absolute filesystem path.

Note:
Scheme (protocol) access may be provided by more than one
backend. In case the default backend is buggy or simply not
working in a specific case it might be worth trying an
alternative implementation. Alternative backends can be
selected by prefixing the scheme with the name of the
alternative backend, e.g. ncftp+ftp://, and are mentioned below
the scheme's syntax summary.

Formats of each of the URL schemes follow:

Amazon Drive Backend

ad://some_dir

See also A NOTE ON AMAZON DRIVE

Azure

azure://container-name

See also A NOTE ON AZURE ACCESS

B2

b2://account_id[:application_key]@bucket_name/[folder/]

Box

box:///some_dir[?config=path_to_config]

See also A NOTE ON BOX ACCESS

Cloud Files (Rackspace)

cf+http://container_name

See also A NOTE ON CLOUD FILES ACCESS

Dropbox

dpbx:///some_dir

Make sure to read A NOTE ON DROPBOX ACCESS first!

Local file path

file://[relative|/absolute]/local/path

FISH (Files transferred over Shell protocol) over ssh

fish://user[:password]@other.host[:port]/[relative|/absolute]_path

FTP

ftp[s]://user[:password]@other.host[:port]/some_dir

NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
default is lftp+ftp://...

Google Docs

gdocs://user[:password]@other.host/some_dir

NOTE: use pydrive+, gdata+ prefixes to enforce a specific
backend, default is pydrive+gdocs://...

Google Cloud Storage

gs://bucket[/prefix]

HSI

hsi://user[:password]@other.host/some_dir

hubiC

cf+hubic://container_name

See also A NOTE ON HUBIC

IMAP email storage

imap[s]://user[:password]@host.com[/from_address_prefix]

See also A NOTE ON IMAP

MEGA.nz cloud storage (only works for accounts created prior to
November 2018, uses "megatools")

mega://user[:password]@mega.nz/some_dir

NOTE: if not given in the URL, relies on password being stored
within $HOME/.megarc (as used by the "megatools" utilities)

MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd"
tools)

megav2://user[:password]@mega.nz/some_dir
megav3://user[:password]@mega.nz/some_dir[?no_logout=1] (For
latest MEGAcmd)

NOTE: although "MEGAcmd" no longer uses a configuration file, for
convenience this backend searches for the user password in the
$HOME/.megav2rc file (same syntax as the old $HOME/.megarc):
[Login]
Username = MEGA_USERNAME
Password = MEGA_PASSWORD

OneDrive Backend

onedrive://some_dir

Par2 Wrapper Backend

par2+scheme://[user[:password]@]host[:port]/[/]path

See also A NOTE ON PAR2 WRAPPER BACKEND

Rclone Backend

rclone://remote:/some_dir

See also A NOTE ON RCLONE BACKEND

Rsync via daemon

rsync://user[:password]@host.com[:port]::[/]module/some_dir

Rsync over ssh (only key auth)

rsync://user@host.com[:port]/[relative|/absolute]_path

S3 storage (Amazon)

s3://host[:port]/bucket_name[/prefix]
s3+http://bucket_name[/prefix]
defaults to the legacy boto backend based on boto v2 (last
update 2018/07)
alternatively try the newer boto3+s3://bucket_name[/prefix]

For details see A NOTE ON AMAZON S3 and see also A NOTE ON
EUROPEAN S3 BUCKETS below.

SCP/SFTP access

scp://.. or
sftp://user[:password]@other.host[:port]/[relative|/absolute]_path

defaults are paramiko+scp:// and paramiko+sftp://
alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
See also --ssh-askpass, --ssh-options and A NOTE ON SSH
BACKENDS.

Swift (Openstack)

swift://container_name[/prefix]

See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS

Public Cloud Archive (OVH)

pca://container_name[/prefix]

See also A NOTE ON PCA ACCESS

Tahoe-LAFS

tahoe://alias/directory

WebDAV

webdav[s]://user[:password]@other.host[:port]/some_dir

alternatively try lftp+webdav[s]://

pydrive

pydrive://<service account's email
address>@developer.gserviceaccount.com/some_dir

See also A NOTE ON PYDRIVE BACKEND below.

gdrive

gdrive://<service account's email
address>@developer.gserviceaccount.com/some_dir

See also A NOTE ON GDRIVE BACKEND below.

multi

multi:///path/to/config.json

See also A NOTE ON MULTI BACKEND below.

MediaFire

mf://user[:password]@mediafire.com/some_dir

See also A NOTE ON MEDIAFIRE BACKEND below.

TIME FORMATS
duplicity uses time strings in two places. Firstly, many of the files
duplicity creates will have the time in their filenames in the w3
datetime format as described in a w3 note at http://www.w3.org/TR/NOTE-
datetime. Basically they look like "2001-07-15T04:09:38-07:00", which
means what it looks like. The "-07:00" section means the time zone is
7 hours behind UTC.

Secondly, the -t, --time, and --restore-time options take a time
string, which can be given in any of several formats:

1. the string "now" (refers to the current time)

2. a sequence of digits, like "123456890" (indicating the time in
seconds after the epoch)

3. A string like "2002-01-25T07:00:00+02:00" in datetime format

4. An interval, which is a number followed by one of the characters
s, m, h, D, W, M, or Y (indicating seconds, minutes, hours,
days, weeks, months, or years respectively), or a series of such
pairs. In this case the string refers to the time that preceded
the current time by the length of the interval. For instance,
"1h78m" indicates the time that was one hour and 78 minutes ago.
The calendar here is unsophisticated: a month is always 30 days,
a year is always 365 days, and a day is always 86400 seconds.

5. A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY, or
MM-DD-YYYY, which indicates midnight on the day in question,
relative to the current time zone settings. For instance,
"2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
2002.

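For example, an interval and an explicit date may be used
interchangeably when restoring (URL and times are hypothetical):

duplicity -t 2W sftp://uid@other.host/some_dir /home/me
duplicity -t 2002-01-25 sftp://uid@other.host/some_dir /home/me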

FILE SELECTION
When duplicity is run, it searches through the given source directory
and backs up all the files specified by the file selection system. The
file selection system comprises a number of file selection conditions,
which are set using one of the following command line options:
--exclude
--exclude-device-files
--exclude-if-present
--exclude-filelist
--exclude-regexp
--include
--include-filelist
--include-regexp
Each file selection condition either matches or doesn't match a given
file. A given file is excluded by the file selection system exactly
when the first matching file selection condition specifies that the
file be excluded; otherwise the file is included.

For instance,

duplicity --include /usr --exclude /usr /usr
scp://user@host/backup

is exactly the same as

duplicity /usr scp://user@host/backup

because the include and exclude directives match exactly the same
files, and the --include comes first, giving it precedence. Similarly,

duplicity --include /usr/local/bin --exclude /usr/local /usr
scp://user@host/backup

would back up the /usr/local/bin directory (and its contents), but not
/usr/local/doc.

The include, exclude, include-filelist, and exclude-filelist options
accept some extended shell globbing patterns. These patterns can
contain *, **, ?, and [...] (character ranges). As in a normal shell,
* can be expanded to any string of characters not containing "/", ?
expands to any character except "/", and [...] expands to a single
character of those characters specified (ranges are acceptable). The
new special pattern, **, expands to any string of characters whether or
not it contains "/". Furthermore, if the pattern starts with
"ignorecase:" (case insensitive), then this prefix will be removed and
any character in the string can be replaced with an upper- or lowercase
version of itself.

Remember that you may need to quote these characters when typing them
into a shell, so the shell does not interpret the globbing patterns
before duplicity sees them.

The --exclude pattern option matches a file if:

1. pattern can be expanded into the file's filename, or
2. the file is inside a directory matched by the option.

Conversely, the --include pattern matches a file if:

1. pattern can be expanded into the file's filename, or
2. the file is inside a directory matched by the option, or
3. the file is a directory which contains a file matched by the
option.

For example,

--exclude /usr/local

matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
It is the same as --exclude /usr/local --exclude '/usr/local/**'.

On the other hand

--include /usr/local

specifies that /usr, /usr/local, /usr/local/lib, and
/usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus you
don't have to worry about including parent directories to make sure
that included subdirectories have somewhere to go.

Finally,

--include ignorecase:'/usr/[a-z0-9]foo/*/**.py'

would match a file like /usR/5fOO/hello/there/world.py. If it did
match anything, it would also match /usr. If there is no existing file
that the given pattern can be expanded into, the option will not match
/usr alone.

The --include-filelist, and --exclude-filelist, options also introduce
file selection conditions. They direct duplicity to read in a text
file (either ASCII or UTF-8), each line of which is a file
specification, and to include or exclude the matching files. Lines are
separated by newlines or nulls, depending on whether the --null-
separator switch was given. Each line in the filelist will be
interpreted as a globbing pattern the way --include and --exclude
options are interpreted, except that lines starting with "+ " are
interpreted as include directives, even if found in a filelist
referenced by --exclude-filelist. Similarly, lines starting with "- "
exclude files even if they are found within an include filelist.

For example, if file "list.txt" contains the lines:

/usr/local
- /usr/local/doc
/usr/local/bin
+ /var
- /var

then --include-filelist list.txt would include /usr, /usr/local, and
/usr/local/bin. It would exclude /usr/local/doc,
/usr/local/doc/python, etc. It would also include /usr/local/man, as
this is included within /usr/local. Finally, it is undefined what
happens with /var. A single file list should not contain conflicting
file specifications.

Each line in the filelist will also be interpreted as a globbing
pattern the way --include and --exclude options are interpreted. For
instance, if the file "list.txt" contains the lines:

dir/foo
+ dir/bar
- **

Then --include-filelist list.txt would be exactly the same as
specifying --include dir/foo --include dir/bar --exclude ** on the
command line.

Finally, the --include-regexp and --exclude-regexp options allow files
to be included and excluded if their filenames match a python regular
expression. Regular expression syntax is too complicated to explain
here, but is covered in Python's library reference. Unlike the
--include and --exclude options, the regular expression options don't
match files containing or contained in matched files. So for instance

--include '[0-9]{7}(?!foo)'

matches any files whose full pathnames contain 7 consecutive digits
which aren't followed by 'foo'. However, it wouldn't match /home even
if /home/ben/1234567 existed.

A NOTE ON AMAZON DRIVE
1. The API Keys used for Amazon Drive have not been granted
production limits. Amazon do not say what the development
limits are and are not replying to requests to whitelist
duplicity. A related tool, acd_cli, was demoted to development
limits, but continues to work fine except for cases of excessive
usage. If you experience throttling and similar issues with
Amazon Drive using this backend, please report them to the
mailing list.

2. If you previously used the acd+acdcli backend, it is strongly
recommended to update to the ad backend instead, since it
interfaces directly with Amazon Drive. You will need to set up
the OAuth once again, but can otherwise keep your backups and
config.

A NOTE ON AMAZON S3
When backing up to Amazon S3, two backend implementations are
available. The schemes "s3" and "s3+http" are implemented using the
older boto library, which has been deprecated and is no longer
supported. The "boto3+s3" scheme is based on the newer boto3 library.
This new backend fixes several known limitations in the older backend,
which have crept in as Amazon S3 has evolved while the deprecated boto
library has not kept up.

The boto3 backend should behave largely the same as the older S3
backend, but there are some differences in the handling of some of the
"S3" options. Additionally, there are some compatibility differences
with the new backend. For these reasons, both backends have been
retained for the time being. See the documentation for specific
options regarding differences related to each backend.

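A minimal sketch of a backup via the newer backend (the bucket name is
hypothetical; credentials are expected in the usual AWS environment
variables or configuration):

duplicity /home/me boto3+s3://my-backup-bucket/some_dir
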
The boto3 backend does not support bucket creation. This is a
deliberate choice which simplifies the code, and side steps problems
related to region selection. Additionally, it is probably not a good
practice to give your backup role bucket creation rights. In most
cases the role used for backups should probably be limited to specific
buckets.

The boto3 backend only supports newer domain style buckets. Amazon is
moving to deprecate the older bucket style, so migration is
recommended. Use the older s3 backend for compatibility with backups
stored in buckets using older naming conventions.

The boto3 backend does not currently support initiating restores from
the glacier storage class. When restoring a backup from glacier or
glacier deep archive, the backup files must first be restored out of
band. There are multiple options when restoring backups from cold
storage, which vary in both cost and speed. See Amazon's documentation
for details.

A NOTE ON AZURE ACCESS
The Azure backend requires the Microsoft Azure Storage Blobs client
library for Python to be installed on the system. See REQUIREMENTS.

It uses the environment variable AZURE_CONNECTION_STRING (required).
This string contains all necessary information such as Storage Account
name and the key for authentication. You can find it under Access Keys
for the storage account.

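A sketch (the connection string value and container name are
hypothetical):

AZURE_CONNECTION_STRING="DefaultEndpointsProtocol=https;..." duplicity
/home/me azure://duplicity-backups
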
Duplicity will take care of creating the container when performing the
backup. Do not create it manually beforehand.

A container name (as given in the backup url) must be a valid DNS
name, conforming to the following naming rules:

1. Container names must start with a letter or number, and
can contain only letters, numbers, and the dash (-)
character.

2. Every dash (-) character must be immediately preceded and
followed by a letter or number; consecutive dashes are
not permitted in container names.

3. All letters in a container name must be lowercase.

4. Container names must be from 3 through 63 characters
long.

These rules come from Azure; see https://docs.microsoft.com/en-
us/rest/api/storageservices/naming-and-referencing-
containers--blobs--and-metadata

A NOTE ON BOX ACCESS
The box backend requires boxsdk with jwt support to be installed on the
system. See REQUIREMENTS.

It uses the environment variable BOX_CONFIG_PATH (optional). This
string contains the path to the box custom app's config.json. Either
this environment variable or the config query parameter in the url
needs to be specified; if both are specified, the query parameter
takes precedence.

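A sketch (the config path is hypothetical):

BOX_CONFIG_PATH=/home/me/box_config.json duplicity /home/me
box:///some_dir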
1501
1502 Create a Box custom app
1503 In order to use box backend, user need to create a box custom app in
1504 the box developer console (https://app.box.com/developers/console).
1505
1506 After create a new custom app, please make sure it is configured as
1507 follow:
1508
1509
1510 1. Choose "App Access Only" for "App Access Level"
1511
1512 2. Check "Write all files and folders stored in Box"
1513
1514 3. Generate a Public/Private Keypair
1515
1516 The user also need to grant the created custom app permission in the
1517 admin console (https://app.box.com/master/custom-apps) by clicking the
1518 "+" button and enter the client_id which can be found on the custom
1519 app's configuration page.

A NOTE ON CLOUD FILES ACCESS
       Pyrax is Rackspace's next-generation Cloud management API, including
       Cloud Files access.  The cfpyrax backend requires the pyrax library
       to be installed on the system.  See REQUIREMENTS.
1526
1527 Cloudfiles is Rackspace's now deprecated implementation of OpenStack
1528 Object Storage protocol. Users wishing to use Duplicity with Rackspace
1529 Cloud Files should migrate to the new Pyrax plugin to ensure support.
1530
1531 The backend requires python-cloudfiles to be installed on the system.
1532 See REQUIREMENTS.
1533
1534 It uses three environment variables for authentification:
1535 CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
1536 CLOUDFILES_AUTHURL (optional)
1537
1538 If CLOUDFILES_AUTHURL is unspecified it will default to the value
1539 provided by python-cloudfiles, which points to rackspace, hence this
1540 value must be set in order to use other cloud files providers.
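
       For example (the credentials and container name are placeholders):

              export CLOUDFILES_USERNAME=myuser
              export CLOUDFILES_APIKEY=0123456789abcdef
              duplicity /home/me cf+http://backup_container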

A NOTE ON DROPBOX ACCESS
       1.     First of all the Dropbox backend requires a valid
              authentication token.  It should be passed via the
              DPBX_ACCESS_TOKEN environment variable.
              To obtain it please create a 'Dropbox API' application at:
              https://www.dropbox.com/developers/apps/create
              Then visit app settings and just use the 'Generated access
              token' under the OAuth2 section.
              Alternatively you can let duplicity generate the access token
              itself.  In that case temporarily export DPBX_APP_KEY and
              DPBX_APP_SECRET using values from the app settings page and
              run duplicity interactively.
              It will print the URL that you need to open in the browser to
              obtain an OAuth2 token for the application.  Just follow the
              on-screen instructions and then put the generated token into
              the DPBX_ACCESS_TOKEN variable.  Once done, feel free to unset
              DPBX_APP_KEY and DPBX_APP_SECRET.

       2.     "some_dir" must already exist in the Dropbox folder.
              Depending on the kind of access token it may be:
                     Full Dropbox: path is absolute and starts from the
                     'Dropbox' root folder.
                     App Folder: path is relative to the application folder.
                     The Dropbox client will show it in
                     ~/Dropbox/Apps/<app-name>

       3.     When using Dropbox for storage, be aware that all files,
              including the ones in the Apps folder, will be synced to all
              connected computers.  You may prefer to use a separate Dropbox
              account specifically for the backups, and not connect any
              computers to that account.  Alternatively you can configure
              selective sync on all computers to avoid syncing of backup
              files.  An example invocation is shown below.
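
       Putting it together, a backup to an existing Dropbox folder might
       look like this (the token value is a placeholder):

              export DPBX_ACCESS_TOKEN=abcd1234...
              duplicity /home/me dpbx:///some_dir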

A NOTE ON EUROPEAN S3 BUCKETS
       Amazon S3 provides the ability to choose the location of a bucket
       upon its creation.  The purpose is to enable the user to choose a
       location that is topologically closer on the network, as this may
       allow for faster data transfers.

       duplicity will create a new bucket the first time a bucket access is
       attempted.  At this point, the bucket will be created in Europe if
       --s3-european-buckets was given.  For reasons having to do with how
       the Amazon S3 service works, this also requires the use of the
       --s3-use-new-style option.  This option turns on subdomain-based
       bucket addressing in S3.  The details are beyond the scope of this
       man page, but it is important to know that your bucket must not
       contain upper-case letters or any other characters that are not valid
       parts of a hostname.  Consequently, for reasons of backwards
       compatibility, use of subdomain-based bucket addressing is not
       enabled by default.

       Note that you will need to use --s3-use-new-style for all operations
       on European buckets; not just upon initial creation.

       You only need to use --s3-european-buckets upon initial creation, but
       you may use it at all times for consistency.

       Further note that when creating a new European bucket, it can take a
       while before the bucket is fully accessible.  At the time of this
       writing it is unclear to what extent this is an expected feature of
       Amazon S3, but in practice you may experience timeouts, socket errors
       or HTTP errors when trying to upload files to your newly created
       bucket.  Give it a few minutes and the bucket should function
       normally.
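
       For example, creating and backing up to a European bucket could look
       like this (the bucket name is a placeholder):

              duplicity --s3-use-new-style --s3-european-buckets \
                /home/me s3+http://my-eu-bucket/backup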

A NOTE ON FILENAME PREFIXES
       Filename prefixes can be used in multi backend with mirror mode to
       define affinity rules.  They can also be used in conjunction with S3
       lifecycle rules to transition archive files to Glacier, while keeping
       metadata (signature and manifest files) on S3.

       Duplicity does not require access to archive files except when
       restoring from backup.
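
       For instance, the following tags each file class with its own prefix,
       so an S3 lifecycle rule matching "archive_" can move only the bulky
       archive volumes to Glacier; the prefix strings are arbitrary examples:

              duplicity --file-prefix-archive archive_ \
                --file-prefix-manifest manifest_ \
                --file-prefix-signature signature_ \
                /home/me s3+http://my-bucket/backup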

A NOTE ON GOOGLE CLOUD STORAGE
       Support for Google Cloud Storage relies on its Interoperable Access,
       which must be enabled for your account.  Once enabled, you can
       generate Interoperable Storage Access Keys and pass them to duplicity
       via the GS_ACCESS_KEY_ID and GS_SECRET_ACCESS_KEY environment
       variables.  Alternatively, you can run gsutil config -a to have the
       Google Cloud Storage utility populate the ~/.boto configuration file.

       Enable Interoperable Access:
              https://code.google.com/apis/console#:storage
       Create Access Keys:
              https://code.google.com/apis/console#:storage:legacy
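
       With the keys in place, a backup might be started like this (key
       values and bucket name are placeholders):

              export GS_ACCESS_KEY_ID=GOOGABCDEFG123456
              export GS_SECRET_ACCESS_KEY=secret_key_here
              duplicity /home/me gs://my-backup-bucket/folder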

A NOTE ON HUBIC
       The hubic backend requires the pyrax library to be installed on the
       system.  See REQUIREMENTS.  You will need to set your credentials for
       hubiC in a file called ~/.hubic_credentials, following this pattern:

              [hubic]
              email = your_email
              password = your_password
              client_id = api_client_id
              client_secret = api_secret_key
              redirect_uri = http://localhost/

A NOTE ON IMAP
       An IMAP account can be used as a target for the upload.  The userid
       may be specified and the password will be requested.

       The from_address_prefix may be specified (and probably should be).
       The text will be used as the "From" address in the IMAP server.
       Then, on a restore (or list) command, the from_address_prefix will
       distinguish between different backups.
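
       The prefix is carried in the URL, so a backup and a later listing
       could look like this (host and prefix are placeholders; the password
       is prompted for):

              duplicity /home/me imaps://user@mail.example.com/backup_prefix
              duplicity list-current-files \
                imaps://user@mail.example.com/backup_prefix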

A NOTE ON MULTI BACKEND
       The multi backend allows duplicity to combine the storage available
       in more than one backend store (e.g., you can store across a google
       drive account and a onedrive account to get effectively the combined
       storage available in both).  The URL path specifies a JSON formatted
       config file containing a list of the backends it will use.  The URL
       may also specify "query" parameters to configure overall behavior.
       Each element of the list must have a "url" element, and may also
       contain an optional "description" and an optional "env" list of
       environment variables used to configure that backend.

   Query Parameters
       Query parameters come after the file URL in standard HTTP format,
       for example:
              multi:///path/to/config.json?mode=mirror&onfail=abort
              multi:///path/to/config.json?mode=stripe&onfail=continue
              multi:///path/to/config.json?onfail=abort&mode=stripe
              multi:///path/to/config.json?onfail=abort
       Order does not matter, but unrecognized parameters are considered an
       error.

       mode=stripe
              This mode (the default) performs round-robin access to the
              list of backends.  In this mode, all backends must be reliable
              as the loss of one means the loss of one of the archive files.

       mode=mirror
              This mode accesses backends as a RAID1 store, storing every
              file in every backend and reading files from the first
              successful backend.  The loss of any one backend should result
              in no failure.  Note that backends added later will only get
              new files and may require a manual sync with one of the other
              operating ones.

       onfail=continue
              This setting (the default) continues all write operations on
              a best-effort basis.  Any failure results in the next backend
              being tried.  Failure is reported only when all backends fail
              a given operation, with the error result from the last
              failure.

       onfail=abort
              This setting considers any backend write failure as a
              terminating condition and reports the error.  Data reading
              and listing operations are independent of this and will try
              the next backend on failure.

   JSON File Example
              [
               {
                "description": "a comment about the backend",
                "url": "abackend://myuser@domain.com/backup",
                "env": [
                  {
                   "name" : "MYENV",
                   "value" : "xyz"
                  },
                  {
                   "name" : "FOO",
                   "value" : "bar"
                  }
                 ],
                "prefixes": ["prefix1_", "prefix2_"]
               },
               {
                "url": "file:///path/to/dir"
               }
              ]
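
       A backup using such a config file in mirror mode could then be run
       as follows (the config path is a placeholder; the quotes keep the
       shell from interpreting the ampersand):

              duplicity /home/me \
                "multi:///path/to/config.json?mode=mirror&onfail=abort"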

A NOTE ON PAR2 WRAPPER BACKEND
       The Par2 Wrapper Backend can be used in combination with all other
       backends to create recovery files.  Just add par2+ before a regular
       scheme (e.g.  par2+ftp://user@host/dir or par2+s3+http://bucket_name
       ).  This will create par2 recovery files for each archive and upload
       them all to the wrapped backend.

       Before restoring, archives will be verified.  Corrupt archives will
       be repaired on the fly if there are enough recovery blocks available.

       Use --par2-redundancy percent to adjust the size (and redundancy) of
       recovery files, in percent.
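
       For example, to keep 10% redundancy data alongside an sftp backup:

              duplicity --par2-redundancy 10 /home/me \
                par2+sftp://uid@other.host/some_dir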

A NOTE ON PYDRIVE BACKEND
       The pydrive backend requires the Python PyDrive package to be
       installed on the system.  See REQUIREMENTS.

       There are two ways to use PyDrive: with a regular account or with a
       "service account".  With a service account, a separate account is
       created that is only accessible via Google APIs and not a web login.
       With a regular account, you can store backups in your normal Google
       Drive.

       To use a service account, go to the Google developers console at
       https://console.developers.google.com.  Create a project, and make
       sure the Drive API is enabled for the project.  Under "APIs and
       auth", click Create New Client ID, then select Service Account with
       P12 key.

       Download the .p12 key file of the account and convert it to the .pem
       format:
              openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem

       The content of the .pem file should be passed to the
       GOOGLE_DRIVE_ACCOUNT_KEY environment variable for authentication.

       The email address of the account will be used as part of the URL.
       See URL FORMAT above.

       The alternative is to use a regular account.  To do this, start as
       above, but when creating a new Client ID, select "Installed
       application" of type "Other".  Create a file with the following
       content, and pass its filename in the GOOGLE_DRIVE_SETTINGS
       environment variable:

              client_config_backend: settings
              client_config:
                  client_id: <Client ID from developers' console>
                  client_secret: <Client secret from developers' console>
              save_credentials: True
              save_credentials_backend: file
              save_credentials_file: <filename to cache credentials>
              get_refresh_token: True

       In this scenario, the username and host parts of the URL play no
       role; only the path matters.  During the first run, you will be
       prompted to visit a URL in your browser to grant access to your
       drive.  Once granted, you will receive a verification code to paste
       back into Duplicity.  The credentials are then cached in the file
       referenced above for future use.
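
       In either variant the invocation then looks much the same; only the
       environment variable differs (the path and account address below are
       placeholders):

              export GOOGLE_DRIVE_SETTINGS=/path/to/settings.yaml
              duplicity /home/me \
                pydrive://myaccount@developer.gserviceaccount.com/some_dir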

A NOTE ON GDRIVE BACKEND
       GDrive: is a rewritten PyDrive: backend with fewer dependencies and a
       simpler setup - it uses the JSON keys downloaded directly from the
       Google Cloud Console.

       Note that Google has two drive methods, `Shared (previously Team)
       Drives` and `My Drive`; both can be shared but require different
       addressing.

       For a Google Shared Drives folder, the Shared Drive ID is specified
       as a query parameter, driveID, in the backend URL.  Example:

              gdrive://developer.gserviceaccount.com/target-folder/?driveID=<SHARED DRIVE ID>

       For a Google My Drive based shared folder, the My Drive folder ID is
       specified as a query parameter, myDriveFolderID, in the backend URL.
       Example:

              export GOOGLE_SERVICE_ACCOUNT_URL=<serviceaccount-name>@<serviceaccount-name>.iam.gserviceaccount.com
              gdrive://${GOOGLE_SERVICE_ACCOUNT_URL}/<target-folder-name-in-myDriveFolder>?myDriveFolderID=<google-myDrive-folder-id>

       There are also two ways to authenticate to use GDrive: with a
       regular account or with a "service account".  With a service
       account, a separate account is created that is only accessible via
       Google APIs and not a web login.  With a regular account, you can
       store backups in your normal Google Drive.

       To use a service account, go to the Google developers console at
       https://console.developers.google.com.  Create a project, and make
       sure the Drive API is enabled for the project.  In the "Credentials"
       section, click "Create credentials", then select Service Account
       with JSON key.

       The GOOGLE_SERVICE_JSON_FILE environment variable needs to contain
       the path to the JSON file on duplicity invocation.

              export GOOGLE_SERVICE_JSON_FILE=<path-to-serviceaccount-credentials.json>

       The alternative is to use a regular account.  To do this, start as
       above, but when creating a new Client ID, select "Create OAuth
       client ID", with application type "Desktop app".  Download the
       client_secret.json file for the new client, and set the
       GOOGLE_CLIENT_SECRET_JSON_FILE environment variable to the path to
       this file, and GOOGLE_CREDENTIALS_FILE to a path to a file where
       duplicity will keep the authentication token - this location must be
       writable.

       During the first run, you will be prompted to visit a URL in your
       browser to grant access to your drive.  Once granted, you will
       receive a verification code to paste back into Duplicity.  The
       credentials are then cached in the file referenced above for future
       use.

       As a sanity check, GDrive checks the host and username from the URL
       against the JSON key, and refuses to proceed if the addresses do not
       match.  Either the email (for the service accounts) or the Client ID
       (for regular OAuth accounts) must be present in the URL.  See URL
       FORMAT above.
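
       A complete service-account invocation might therefore look like this
       (the file path, account address and folder name are placeholders):

              export GOOGLE_SERVICE_JSON_FILE=/path/to/credentials.json
              duplicity /home/me \
                gdrive://myaccount@myproject.iam.gserviceaccount.com/backup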

A NOTE ON RCLONE BACKEND
       Rclone is a powerful command line program to sync files and
       directories to and from various cloud storage providers.

       Once you have configured an rclone remote via

              rclone config

       and successfully set up a remote (e.g. gdrive for Google Drive),
       assuming you can list your remote files with

              rclone ls gdrive:mydocuments

       you can start your backup with

              duplicity /mydocuments rclone://gdrive:/mydocuments

       Please note the slash after the second colon.  Some storage
       providers will work with or without a slash after the colon, but
       others will not.  Since duplicity will complain about a malformed
       URL if a slash is not present, always put it after the colon, and
       the backend will handle it for you.

A NOTE ON SSH BACKENDS
       The ssh backends support sftp and scp/ssh transport protocols.  This
       is a known user-confusing issue as these are fundamentally different.
       If you plan to access your backend via one of those, please inform
       yourself about the requirements for a server to support sftp or
       scp/ssh access.  To make it even more confusing, the user can choose
       between several ssh backends via a scheme prefix: paramiko+
       (default), pexpect+, lftp+... .  paramiko & pexpect support
       --use-scp, --ssh-askpass and --ssh-options.  Only the pexpect
       backend allows one to define --scp-command and --sftp-command.
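
       For example, the following all address the same server through
       different ssh backends (host and path are placeholders):

              duplicity /home/me sftp://uid@other.host/some_dir
              duplicity /home/me paramiko+scp://uid@other.host/some_dir
              duplicity /home/me pexpect+sftp://uid@other.host/some_dir
              duplicity /home/me lftp+sftp://uid@other.host/some_dir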

       The SSH paramiko backend (default) is a complete reimplementation of
       the ssh protocols natively in python.  Advantages are speed and
       maintainability.  A minor disadvantage is that extra packages are
       needed as listed in REQUIREMENTS.  In sftp (default) mode all
       operations are done via the corresponding sftp commands.  In scp
       mode ( --use-scp ) scp access is used for put/get operations, but
       listing is done via the ssh remote shell.

       The SSH pexpect backend is the legacy ssh backend using the command
       line ssh binaries via pexpect.  Older versions used scp for get and
       put operations and sftp for list and delete operations.  The current
       version uses sftp for all four supported operations, unless the
       --use-scp option is used to revert to the old behavior.

       The SSH lftp backend is simply there because lftp can interact with
       the ssh cmd line binaries.  It is meant as a last resort in case the
       above options fail for some reason.

       Why use sftp instead of scp?  The change to sftp was made in order
       to allow the remote system to chroot the backup, thus providing
       better security, and because it does not suffer from shell quoting
       issues like scp.  Scp also does not support any kind of file
       listing, so sftp or ssh access will always be needed in addition for
       this backend mode to work properly.  Sftp does not have these
       limitations but needs an sftp service running on the backend server,
       which is sometimes not an option.

A NOTE ON SSL CERTIFICATE VERIFICATION
       Certificate verification is, as of this writing [02.2016], only
       implemented in the webdav and lftp backends.  Older pythons (2.7.8
       and earlier) and older lftp binaries need a file-based database of
       certification authority certificates (cacert file).
       Newer pythons (2.7.9+) and recent lftp versions, however, support
       the system default certificates (usually in /etc/ssl/certs) and also
       allow giving an alternative ca cert folder via --ssl-cacert-path.

       The cacert file has to be a PEM formatted text file as currently
       provided by the CURL project.  See

              http://curl.haxx.se/docs/caextract.html

       After creating/retrieving a valid cacert file you should copy it to
       either

              ~/.duplicity/cacert.pem
              ~/duplicity_cacert.pem
              /etc/duplicity/cacert.pem

       Duplicity searches these locations in the given order and will fail
       if it can't find the file.  You can however specify the option
       --ssl-cacert-file <file> to point duplicity to a copy in a different
       location.
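
       For example, to verify a webdavs server against a custom ca bundle
       (the path and host are placeholders):

              duplicity --ssl-cacert-file /path/to/cacert.pem \
                /home/me webdavs://user@backup.example.com/some_dir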

       Finally there is the --ssl-no-check-certificate option to disable
       certificate verification altogether, in case some ssl library is
       missing or verification is not wanted.  Use it with care, as even
       with self-signed servers manually providing the private ca
       certificate is definitely the safer option.

A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
       Swift is the OpenStack Object Storage service.
       The backend requires python-swiftclient to be installed on the
       system.  python-keystoneclient is also needed to use OpenStack's
       Keystone Identity service.  See REQUIREMENTS.

       It uses the following environment variables for authentication:
       SWIFT_USERNAME (required), SWIFT_PASSWORD (required), SWIFT_AUTHURL
       (required), SWIFT_USERID (required, only for IBM Bluemix
       ObjectStorage), SWIFT_TENANTID (required, only for IBM Bluemix
       ObjectStorage), SWIFT_REGIONNAME (required, only for IBM Bluemix
       ObjectStorage), SWIFT_TENANTNAME (optional, the tenant can be
       included in the username)

       If the user was previously authenticated, the following environment
       variables can be used instead: SWIFT_PREAUTHURL (required),
       SWIFT_PREAUTHTOKEN (required)

       If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
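
       A typical setup might look like this (all values are placeholders):

              export SWIFT_USERNAME=myuser
              export SWIFT_PASSWORD=mypassword
              export SWIFT_AUTHURL=https://auth.example.com/v3
              export SWIFT_AUTHVERSION=3
              duplicity /home/me swift://backup_container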

A NOTE ON PCA ACCESS
       PCA is a long-term data archival solution by OVH.  It runs a
       slightly modified version of OpenStack Swift introducing latency in
       the data retrieval process.  It is a good pick for a multi backend
       configuration for receiving volumes, while another backend is used
       to store manifests and signatures.

       The backend requires python-swiftclient to be installed on the
       system.  python-keystoneclient is also needed to interact with
       OpenStack's Keystone Identity service.  See REQUIREMENTS.

       It uses the following environment variables for authentication:
       PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL
       (required), PCA_USERID (optional), PCA_TENANTID (optional, but
       either the tenant name or tenant id must be supplied),
       PCA_REGIONNAME (optional), PCA_TENANTNAME (optional, but either the
       tenant name or tenant id must be supplied)

       If the user was previously authenticated, the following environment
       variables can be used instead: PCA_PREAUTHURL (required),
       PCA_PREAUTHTOKEN (required)

       If PCA_AUTHVERSION is unspecified, it will default to version 2.
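
       For example (all values are placeholders):

              export PCA_USERNAME=myuser
              export PCA_PASSWORD=mypassword
              export PCA_AUTHURL=https://auth.cloud.ovh.net/v2.0
              duplicity /home/me pca://archive_container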

A NOTE ON MEDIAFIRE BACKEND
       This backend requires the mediafire python library to be installed
       on the system.  See REQUIREMENTS.

       Use URL escaping for the username (and password, if provided via the
       command line):

              mf://duplicity%40example.com@mediafire.com/some_folder

       The destination folder will be created for you if it does not exist.

A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
       Signing and symmetrically encrypting at the same time with the gpg
       binary on the command line, as used within duplicity, is a
       particularly challenging issue.  Tests showed that the following
       combinations work:

       1. Set up gpg-agent properly.  Use the option --use-agent and enter
       both passphrases (symmetric and sign key) in the gpg-agent's dialog.

       2. Use a PASSPHRASE for symmetric encryption of your choice, but
       give the signing key an empty passphrase.

       3. The PASSPHRASE used for symmetric encryption and the passphrase
       of the signing key are identical (see the example below).
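
       A minimal sketch of the third combination, assuming the signing
       key's passphrase equals PASSPHRASE (the key ID is a placeholder):

              export PASSPHRASE='shared secret'
              duplicity --sign-key ABCD1234 /home/me \
                sftp://uid@other.host/some_dir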

KNOWN ISSUES / BUGS
       Hard links are currently unsupported (they will be treated as
       non-linked regular files).

       Bad signatures will be treated as empty instead of logging an
       appropriate error message.

OPERATION AND DATA FORMATS
       This section describes duplicity's basic operation and the format of
       its data files.  It should not be necessary to read this section to
       use duplicity.

       The files used by duplicity to store backup data are tarfiles in GNU
       tar format.  They can be produced independently by rdiffdir(1).  For
       incremental backups, new files are saved normally in the tarfile.
       But when a file changes, instead of storing a complete copy of the
       file, only a diff is stored, as generated by rdiff(1).  If a file is
       deleted, a 0 length file is stored in the tar.  It is possible to
       restore a duplicity archive "manually" by using tar and then cp,
       rdiff, and rm as necessary.  These duplicity archives have the
       extension difftar.
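
       A rough sketch of such a manual restore, assuming an unencrypted
       volume (the file names below are hypothetical):

              tar -xf duplicity-full.20211109T000000Z.vol1.difftar
              # a file changed since the previous set is stored as an rdiff
              # delta and is rebuilt against the old version of the file:
              rdiff patch old/version/of/file extracted/delta/of/file restored/file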

       Both full and incremental backup sets have the same format.  In
       effect, a full backup set is an incremental one generated from an
       empty signature (see below).  The files in full backup sets will
       start with duplicity-full while the incremental sets start with
       duplicity-inc.  When restoring, duplicity applies patches in order,
       so deleting, for instance, a full backup set may make related
       incremental backup sets unusable.

       In order to determine which files have been deleted, and to
       calculate diffs for changed files, duplicity needs to process
       information about previous sessions.  It stores this information in
       the form of tarfiles where each entry's data contains the signature
       (as produced by rdiff) of the file instead of the file's contents.
       These signature sets have the extension sigtar.

       Signature files are not required to restore a backup set, but
       without an up-to-date signature, duplicity cannot append an
       incremental backup to an existing archive.

       To save bandwidth, duplicity generates full signature sets and
       incremental signature sets.  A full signature set is generated for
       each full backup, and an incremental one for each incremental
       backup.  These start with duplicity-full-signatures and
       duplicity-new-signatures respectively.  These signatures will be
       stored both locally and remotely.  The remote signatures will be
       encrypted if encryption is enabled.  The local signatures will not
       be encrypted and are stored in the archive dir (see --archive-dir ).

REQUIREMENTS
       Duplicity requires a POSIX-like operating system with a python
       interpreter version 2.6+ installed.  It is best used under
       GNU/Linux.

       Some backends also require additional components (probably available
       as packages for your specific platform):

       Amazon Drive backend
              python-requests - http://python-requests.org
              python-requests-oauthlib -
              https://github.com/requests/requests-oauthlib

       azure backend (Azure Storage Blob Service)
              Microsoft Azure Storage Blobs client library for Python -
              https://pypi.org/project/azure-storage-blob/

       boto backend (S3 Amazon Web Services, Google Cloud Storage)
              boto version 2.0+ - http://github.com/boto/boto

       box backend (box.com)
              boxsdk - https://github.com/box/box-python-sdk

       cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
              Rackspace CloudFiles Pyrax API -
              http://docs.rackspace.com/sdks/guide/content/python.html

       dpbx backend (Dropbox)
              Dropbox Python SDK -
              https://www.dropbox.com/developers/reference/sdk

       gdocs gdata backend (legacy Google Docs backend)
              Google Data APIs Python Client Library -
              http://code.google.com/p/gdata-python-client/

       gdocs pydrive backend (default)
              see pydrive backend

       gio backend (Gnome VFS API)
              PyGObject - http://live.gnome.org/PyGObject
              D-Bus (dbus) - http://www.freedesktop.org/wiki/Software/dbus

       lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
       sftp, webdav[s])
              LFTP Client - http://lftp.yar.ru/

       MEGA backend (only works for accounts created prior to November
       2018) (mega.nz)
              megatools client - https://github.com/megous/megatools

       MEGA v2 and v3 backend (works for all MEGA accounts) (mega.nz)
              MEGAcmd client - https://mega.nz/cmd

       multi backend
              Multi -- store to more than one backend
              (also see A NOTE ON MULTI BACKEND below).

       ncftp backend (ftp, select via ncftp+ftp://)
              NcFTP - http://www.ncftp.com/

       OneDrive backend (Microsoft OneDrive)
              python-requests-oauthlib -
              https://github.com/requests/requests-oauthlib

       Par2 Wrapper Backend
              par2cmdline - http://parchive.sourceforge.net/

       pydrive backend
              PyDrive -- a wrapper library of google-api-python-client -
              https://pypi.python.org/pypi/PyDrive
              (also see A NOTE ON PYDRIVE BACKEND below).

       rclone backend
              rclone - https://rclone.org/

       rsync backend
              rsync client binary - http://rsync.samba.org/

       ssh paramiko backend (default)
              paramiko (SSH2 for python) -
              http://pypi.python.org/pypi/paramiko (downloads);
              http://github.com/paramiko/paramiko (project page)
              pycrypto (Python Cryptography Toolkit) -
              http://www.dlitz.net/software/pycrypto/

       ssh pexpect backend
              sftp/scp client binaries OpenSSH - http://www.openssh.com/
              Python pexpect module -
              http://pexpect.sourceforge.net/pexpect.html

       swift backend (OpenStack Object Storage)
              Python swiftclient module -
              https://github.com/openstack/python-swiftclient/
              Python keystoneclient module -
              https://github.com/openstack/python-keystoneclient/

       webdav backend
              certificate authority database file for ssl certificate
              verification of HTTPS connections -
              http://curl.haxx.se/docs/caextract.html
              (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
              Python kerberos module for kerberos authentication -
              https://github.com/02strich/pykerberos

       MediaFire backend
              MediaFire Python Open SDK -
              https://pypi.python.org/pypi/mediafire/

AUTHOR
       Original Author - Ben Escoto <bescoto@stanford.edu>

       Current Maintainer - Kenneth Loafman <kenneth@loafman.com>

       Continuous Contributors
              Edgar Soldin, Mike Terry

       Most backends were contributed individually.  Information about
       their authorship may be found in the corresponding file's header.

       Also we'd like to thank everybody posting issues to the mailing list
       or on launchpad, sending in patches or contributing otherwise.
       Duplicity wouldn't be as stable and useful if it weren't for you.

       A special thanks goes to rsync.net, a Cloud Storage provider with
       explicit support for duplicity, for several monetary donations and
       for providing a special "duplicity friends" rate for their offsite
       backup service.  Email info@rsync.net for details.

SEE ALSO
       rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).

Version 0.8.21                November 09, 2021                  DUPLICITY(1)