DUPLICITY(1)                     User Manuals                    DUPLICITY(1)
2
3

NAME
6 duplicity - Encrypted incremental backup to local or remote storage.
7

SYNOPSIS
10 For detailed descriptions for each command see chapter ACTIONS.
11
12 duplicity [full|incremental] [options] source_directory target_url
13
14 duplicity verify [options] [--compare-data] [--file-to-restore
15 <relpath>] [--time time] source_url target_directory
16
17 duplicity collection-status [options] [--file-changed <relpath>]
18 target_url
19
20 duplicity list-current-files [options] [--time time] target_url
21
22 duplicity [restore] [options] [--file-to-restore <relpath>] [--time
23 time] source_url target_directory
24
25 duplicity remove-older-than <time> [options] [--force] target_url
26
27 duplicity remove-all-but-n-full <count> [options] [--force] target_url
28
29 duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
30 target_url
31
32 duplicity cleanup [options] [--force] target_url
33
34 duplicity replicate [options] [--time time] source_url target_url
35

DESCRIPTION
38 Duplicity incrementally backs up files and folders into tar-format
39 volumes encrypted with GnuPG and places them to a remote (or local)
40 storage backend. See chapter URL FORMAT for a list of all supported
41 backends and how to address them. Because duplicity uses librsync,
42 incremental backups are space efficient and only record the parts of
43 files that have changed since the last backup. Currently duplicity
44 supports deleted files, full Unix permissions, uid/gid, directories,
45 symbolic links, fifos, etc., but not hard links.
46
47 If you are backing up the root directory /, remember to --exclude
48 /proc, or else duplicity will probably crash on the weird stuff in
49 there.
50

EXAMPLES
53 Here is an example of a backup, using sftp to back up /home/me to
54 some_dir on the other.host machine:
55
56 duplicity /home/me sftp://uid@other.host/some_dir
57
58 If the above is run repeatedly, the first will be a full backup, and
59 subsequent ones will be incremental. To force a full backup, use the
60 full action:
61
62 duplicity full /home/me sftp://uid@other.host/some_dir
63
       or enforce a full backup after a given amount of time via
       --full-if-older-than <time>, e.g. a full every month:
66
67 duplicity --full-if-older-than 1M /home/me
68 sftp://uid@other.host/some_dir
69
70 Now suppose we accidentally delete /home/me and want to restore it the
71 way it was at the time of last backup:
72
73 duplicity sftp://uid@other.host/some_dir /home/me
74
75 Duplicity enters restore mode because the URL comes before the local
76 directory. If we wanted to restore just the file "Mail/article" in
77 /home/me as it was three days ago into /home/me/restored_file:
78
79 duplicity -t 3D --file-to-restore Mail/article
80 sftp://uid@other.host/some_dir /home/me/restored_file
81
82 The following command compares the latest backup with the current
83 files:
84
85 duplicity verify sftp://uid@other.host/some_dir /home/me
86
87 Finally, duplicity recognizes several include/exclude options. For
88 instance, the following will backup the root directory, but exclude
89 /mnt, /tmp, and /proc:
90
91 duplicity --exclude /mnt --exclude /tmp --exclude /proc /
92 file:///usr/local/backup
93
94 Note that in this case the destination is the local directory
95 /usr/local/backup. The following will backup only the /home and /etc
96 directories under root:
97
98 duplicity --include /home --include /etc --exclude '**' /
99 file:///usr/local/backup
100
101 Duplicity can also access a repository via ftp. If a user name is
102 given, the environment variable FTP_PASSWORD is read to determine the
103 password:
104
105 FTP_PASSWORD=mypassword duplicity /local/dir
106 ftp://user@other.host/some_dir
107

ACTIONS
       Duplicity knows action commands, which can be fine-tuned with
       options.  The actions for backup (full, incr) and restoration
       (restore) can be left out, as duplicity detects which mode to use
       from the order of the target URL and the local folder: if the
       target URL comes before the local folder, a restore is performed;
       if the local folder comes before the target URL, that folder is
       backed up to the target URL.
       If a backup is requested and old signatures can be found, duplicity
       automatically performs an incremental backup.
119
       Note: The following explanations cover some, but not all, options
       that can be used in connection with each action command.  Consult
       the OPTIONS section for more detailed information.
123
124
125 full <folder> <url>
126 Perform a full backup. A new backup chain is started even if
127 signatures are available for an incremental backup.
128
129
130 incr <folder> <url>
131 If this is requested an incremental backup will be performed.
132 Duplicity will abort if no old signatures can be found.
133
134
135 verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
136 <url> <local_path>
137 Verify tests the integrity of the backup archives at the remote
138 location by downloading each file and checking both that it can
139 restore the archive and that the restored file matches the
140 signature of that file stored in the backup, i.e. compares the
141 archived file with its hash value from archival time. Verify
142 does not actually restore and will not overwrite any local
143 files. Duplicity will exit with a non-zero error level if any
144 files do not match the signature stored in the archive for that
145 file. On verbosity level 4 or higher, it will log a message for
146 each file that differs from the stored signature. Files must be
147 downloaded to the local machine in order to compare them.
148 Verify does not compare the backed-up version of the file to the
149 current local copy of the files unless the --compare-data option
150 is used (see below).
              The --file-to-restore option restricts verify to that file
              or folder.  The --time option selects which backup to
              verify.  The --compare-data option enables data comparison
              (see below).
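
              For instance, to verify a single file against the backup
              from three days ago, comparing its data to the local copy,
              one might run (host and paths are illustrative):

                duplicity verify --compare-data -t 3D --file-to-restore
                Mail/article sftp://uid@other.host/some_dir
                /home/me/Mail/article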
154
155
       collection-status [--file-changed <relpath>] <url>
157 Summarize the status of the backup repository by printing the
158 chains and sets found, and the number of volumes in each.
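
              For example (illustrative host):

                duplicity collection-status sftp://uid@other.host/some_dir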
159
160
161 list-current-files [--time <time>] <url>
162 Lists the files contained in the most current backup or backup
163 at time. The information will be extracted from the signature
164 files, not the archive data itself. Thus the whole archive does
165 not have to be downloaded, but on the other hand if the archive
166 has been deleted or corrupted, this command will not detect it.
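
              For example, to list the files as of three days ago
              (illustrative host):

                duplicity list-current-files --time 3D
                sftp://uid@other.host/some_dir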
167
168
169 restore [--file-to-restore <relpath>] [--time <time>] <url>
170 <target_folder>
171 You can restore the full monty or selected folders/files from a
172 specific time. Use the relative path as it is printed by list-
173 current-files. Usually not needed as duplicity enters restore
174 mode when it detects that the URL comes before the local folder.
175
176
177 remove-older-than <time> [--force] <url>
178 Delete all backup sets older than the given time. Old backup
179 sets will not be deleted if backup sets newer than time depend
180 on them. See the TIME FORMATS section for more information.
181 Note, this action cannot be combined with backup or other
182 actions, such as cleanup. Note also that --force will be needed
183 to delete the files instead of just listing them.
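
              For example, to actually delete all backup sets older than
              six months (illustrative host):

                duplicity remove-older-than 6M --force
                sftp://uid@other.host/some_dir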
184
185
186 remove-all-but-n-full <count> [--force] <url>
              Delete all backup sets that are older than the count:th last
188 full backup (in other words, keep the last count full backups
189 and associated incremental sets). count must be larger than
190 zero. A value of 1 means that only the single most recent backup
191 chain will be kept. Note that --force will be needed to delete
192 the files instead of just listing them.
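
              For example, to keep the last three full backups and their
              associated incremental sets (illustrative host):

                duplicity remove-all-but-n-full 3 --force
                sftp://uid@other.host/some_dir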
193
194
195 remove-all-inc-of-but-n-full <count> [--force] <url>
              Delete incremental sets of all backup sets that are older than
197 the count:th last full backup (in other words, keep only old
198 full backups and not their increments). count must be larger
199 than zero. A value of 1 means that only the single most recent
200 backup chain will be kept intact. Note that --force will be
201 needed to delete the files instead of just listing them.
202
203
204 cleanup [--force] <url>
205 Delete the extraneous duplicity files on the given backend.
206 Non-duplicity files, or files in complete data sets will not be
207 deleted. This should only be necessary after a duplicity
208 session fails or is aborted prematurely. Note that --force will
209 be needed to delete the files instead of just listing them.
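
              For example, after an aborted session (illustrative host):

                duplicity cleanup --force sftp://uid@other.host/some_dir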
210
211
212 replicate [--time time] <source_url> <target_url>
213 Replicate backup sets from source to target backend. Files will
214 be (re)-encrypted and (re)-compressed depending on normal
215 backend options. Signatures and volumes will not get recomputed,
216 thus options like --volsize or --max-blocksize have no effect.
217 When --time time is given, only backup sets older than time will
218 be replicated.
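
              For example, to replicate only backup sets older than one
              year to a local archive (illustrative URLs):

                duplicity replicate --time 1Y
                sftp://uid@other.host/some_dir file:///mnt/archive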
219

OPTIONS
222 --allow-source-mismatch
223 Do not abort on attempts to use the same archive dir or remote
224 backend to back up different directories. duplicity will tell
225 you if you need this switch.
226
227
228 --archive-dir path
229 The archive directory. NOTE: This option changed in 0.6.0. The
230 archive directory is now necessary in order to manage
231 persistence for current and future enhancements. As such, this
232 option is now used only to change the location of the archive
233 directory. The archive directory should not be deleted, or
234 duplicity will have to recreate it from the remote repository
235 (which may require decrypting the backup contents).
236
237 When backing up or restoring, this option specifies that the
238 local archive directory is to be created in path. If the
239 archive directory is not specified, the default will be to
240 create the archive directory in ~/.cache/duplicity/.
241
242 The archive directory can be shared between backups to multiple
243 targets, because a subdirectory of the archive dir is used for
244 individual backups (see --name ).
245
246 The combination of archive directory and backup name must be
247 unique in order to separate the data of different backups.
248
249 The interaction between the --archive-dir and the --name options
250 allows for four possible combinations for the location of the
251 archive dir:
252
253
254 1. neither specified (default)
255 ~/.cache/duplicity/hash-of-url
256
257 2. --archive-dir=/arch, no --name
258 /arch/hash-of-url
259
260 3. no --archive-dir, --name=foo
261 ~/.cache/duplicity/foo
262
263 4. --archive-dir=/arch, --name=foo
264 /arch/foo
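
              As an illustration of the fourth combination above, the
              following hypothetical invocation keeps its local metadata
              in /arch/home_daily:

                duplicity --archive-dir /arch --name home_daily /home/me
                sftp://uid@other.host/some_dir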
265
266
267 --asynchronous-upload
268 (EXPERIMENTAL) Perform file uploads asynchronously in the
269 background, with respect to volume creation. This means that
270 duplicity can upload a volume while, at the same time, preparing
271 the next volume for upload. The intended end-result is a faster
272 backup, because the local CPU and your bandwidth can be more
273 consistently utilized. Use of this option implies additional
274 need for disk space in the temporary storage location; rather
275 than needing to store only one volume at a time, enough storage
276 space is required to store two volumes.
277
278
279 --backend-retry-delay number
280 Specifies the number of seconds that duplicity waits after an
              error has occurred before attempting to repeat the operation.
282
283
284
285 --cf-backend backend
286 Allows the explicit selection of a cloudfiles backend. Defaults
287 to pyrax. Alternatively you might choose cloudfiles.
288
289
290 --b2-hide-files
291 Causes Duplicity to hide files in B2 instead of deleting them.
292 Useful in combination with B2's lifecycle rules.
293
294
295 --compare-data
296 Enable data comparison of regular files on action verify. This
297 conducts a verify as described above to verify the integrity of
298 the backup archives, but additionally compares restored files to
299 those in target_directory. Duplicity will not replace any files
300 in target_directory. Duplicity will exit with a non-zero error
301 level if the files do not correctly verify or if any files from
302 the archive differ from those in target_directory. On verbosity
303 level 4 or higher, it will log a message for each file that
304 differs from its equivalent in target_directory.
305
306
307 --copy-links
308 Resolve symlinks during backup. Enabling this will resolve &
309 back up the symlink's file/folder data instead of the symlink
310 itself, potentially increasing the size of the backup.
311
312
313 --dry-run
314 Calculate what would be done, but do not perform any backend
315 actions
316
317
318 --encrypt-key key-id
319 When backing up, encrypt to the given public key, instead of
320 using symmetric (traditional) encryption. Can be specified
321 multiple times. The key-id can be given in any of the formats
322 supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
323 ID" for details.
324
325
326
327 --encrypt-secret-keyring filename
              This option can only be used with --encrypt-key, and changes
              the path to the secret keyring for the encrypt key to
              filename.  This keyring is not used when creating a backup.
              If not specified, the default secret keyring is used, which
              is usually located at .gnupg/secring.gpg
333
334
335 --encrypt-sign-key key-id
336 Convenience parameter. Same as --encrypt-key key-id --sign-key
337 key-id.
338
339
340 --exclude shell_pattern
341 Exclude the file or files matched by shell_pattern. If a
342 directory is matched, then files under that directory will also
343 be matched. See the FILE SELECTION section for more
344 information.
345
346
347 --exclude-device-files
348 Exclude all device files. This can be useful for
349 security/permissions reasons or if duplicity is not handling
350 device files correctly.
351
352
353 --exclude-filelist filename
354 Excludes the files listed in filename, with each line of the
355 filelist interpreted according to the same rules as --include
356 and --exclude. See the FILE SELECTION section for more
357 information.
358
359
360 --exclude-if-present filename
361 Exclude directories if filename is present. Allows the user to
              specify folders that they do not wish to back up by adding a
363 specified file (e.g. ".nobackup") instead of maintaining a
364 comprehensive exclude/include list.
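
              For example, to skip every folder containing a ".nobackup"
              marker file (illustrative host):

                duplicity --exclude-if-present .nobackup /home/me
                sftp://uid@other.host/some_dir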
365
366
367 --exclude-older-than time
368 Exclude any files whose modification date is earlier than the
369 specified time. This can be used to produce a partial backup
370 that contains only recently changed files. See the TIME FORMATS
371 section for more information.
372
373
374 --exclude-other-filesystems
375 Exclude files on file systems (identified by device number)
376 other than the file system the root of the source directory is
377 on.
378
379
380 --exclude-regexp regexp
381 Exclude files matching the given regexp. Unlike the --exclude
382 option, this option does not match files in a directory it
383 matches. See the FILE SELECTION section for more information.
384
385
386 --file-prefix, --file-prefix-manifest, --file-prefix-archive, --file-
387 prefix-signature
388 Adds a prefix to all files, manifest files, archive files,
389 and/or signature files.
390
391 The same set of prefixes must be passed in on backup and
392 restore.
393
394 If both global and type-specific prefixes are set, global prefix
395 will go before type-specific prefixes.
396
397 See also A NOTE ON FILENAME PREFIXES
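
              For example, with hypothetical prefixes chosen by the user:

                duplicity --file-prefix-manifest mf_ --file-prefix-archive
                arch_ --file-prefix-signature sig_ /home/me
                sftp://uid@other.host/some_dir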
398
399
400 --file-to-restore path
401 This option may be given in restore mode, causing only path to
402 be restored instead of the entire contents of the backup
403 archive. path should be given relative to the root of the
404 directory backed up.
405
406
407 --full-if-older-than time
408 Perform a full backup if an incremental backup is requested, but
409 the latest full backup in the collection is older than the given
410 time. See the TIME FORMATS section for more information.
411
412
413 --force
414 Proceed even if data loss might result. Duplicity will let the
415 user know when this option is required.
416
417
418 --ftp-passive
419 Use passive (PASV) data connections. The default is to use
420 passive, but to fallback to regular if the passive connection
421 fails or times out.
422
423
424 --ftp-regular
425 Use regular (PORT) data connections.
426
427
428 --gio Use the GIO backend and interpret any URLs as GIO would.
429
430
431 --hidden-encrypt-key key-id
              Same as --encrypt-key, but it hides the user's key id from
              the encrypted file.  It uses gpg's --hidden-recipient option
              to obfuscate
434 the owner of the backup. On restore, gpg will automatically try
435 all available secret keys in order to decrypt the backup. See
436 gpg(1) for more details.
437
438
439
440 --ignore-errors
441 Try to ignore certain errors if they happen. This option is only
442 intended to allow the restoration of a backup in the face of
443 certain problems that would otherwise cause the backup to fail.
444 It is not ever recommended to use this option unless you have a
445 situation where you are trying to restore from backup and it is
446 failing because of an issue which you want duplicity to ignore.
447 Even then, depending on the issue, this option may not have an
448 effect.
449
450 Please note that while ignored errors will be logged, there will
451 be no summary at the end of the operation to tell you what was
452 ignored, if anything. If this is used for emergency restoration
453 of data, it is recommended that you run the backup in such a way
454 that you can revisit the backup log (look for lines containing
455 the string IGNORED_ERROR).
456
              If you ever have to use this option for reasons that are not
              understood, or that are understood but not caused by you,
              please contact the duplicity maintainers.  The need to use
              this option under production circumstances would normally be
              considered a bug.
461
462
463 --imap-full-address email_address
464 The full email address of the user name when logging into an
              imap server.  If not supplied, just the user name part of the
466 email address is used.
467
468
469 --imap-mailbox option
470 Allows you to specify a different mailbox. The default is
471 "INBOX". Other languages may require a different mailbox than
472 the default.
473
474
475 --gpg-binary file_path
476 Allows you to force duplicity to use file_path as gpg command
              line binary.  Can be an absolute or relative file path or a
              file name.  The default value is 'gpg'.  The binary will be
              located via the PATH environment variable.
480
481
482 --gpg-options options
483 Allows you to pass options to gpg encryption. The options list
484 should be of the form "--opt1 --opt2=parm" where the string is
485 quoted and the only spaces allowed are between options.
486
487
488 --include shell_pattern
489 Similar to --exclude but include matched files instead. Unlike
490 --exclude, this option will also match parent directories of
491 matched files (although not necessarily their contents). See
492 the FILE SELECTION section for more information.
493
494
495 --include-filelist filename
496 Like --exclude-filelist, but include the listed files instead.
497 See the FILE SELECTION section for more information.
498
499
500 --include-regexp regexp
501 Include files matching the regular expression regexp. Only
502 files explicitly matched by regexp will be included by this
503 option. See the FILE SELECTION section for more information.
504
505
506 --log-fd number
507 Write specially-formatted versions of output messages to the
508 specified file descriptor. The format used is designed to be
509 easily consumable by other programs.
510
511
512 --log-file filename
513 Write specially-formatted versions of output messages to the
514 specified file. The format used is designed to be easily
515 consumable by other programs.
516
517
518 --max-blocksize number
              Determines the block size used when examining files for
              changes during the diff process.  For files < 1MB the
              blocksize is a constant of 512.  For files over 1MB the size
              is given by:
522
                  file_blocksize = int((file_len / (2000 * 512)) * 512)
                  return min(file_blocksize, config.max_blocksize)
525
526 where config.max_blocksize defaults to 2048. If you specify a
527 larger max_blocksize, your difftar files will be larger, but
528 your sigtar files will be smaller. If you specify a smaller
529 max_blocksize, the reverse occurs. The --max-blocksize option
530 should be in multiples of 512.
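
              For example, a sketch raising the cap to 4096 (a multiple of
              512), trading larger difftar files for smaller sigtar files:

                duplicity --max-blocksize 4096 /home/me
                sftp://uid@other.host/some_dir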
531
532
533 --name symbolicname
534 Set the symbolic name of the backup being operated on. The
535 intent is to use a separate name for each logically distinct
536 backup. For example, someone may use "home_daily_s3" for the
537 daily backup of a home directory to Amazon S3. The structure of
              the name is up to the user; it is only important that the names
539 be distinct. The symbolic name is currently only used to affect
540 the expansion of --archive-dir , but may be used for additional
541 features in the future. Users running more than one distinct
542 backup are encouraged to use this option.
543
544 If not specified, the default value is a hash of the backend
545 URL.
546
547
548 --no-compression
549 Do not use GZip to compress files on remote system.
550
551
552 --no-encryption
553 Do not use GnuPG to encrypt files on remote system.
554
555
556 --no-print-statistics
557 By default duplicity will print statistics about the current
558 session after a successful backup. This switch disables that
559 behavior.
560
561
562 --null-separator
563 Use nulls (\0) instead of newlines (\n) as line separators,
564 which may help when dealing with filenames containing newlines.
565 This affects the expected format of the files specified by the
566 --{include|exclude}-filelist switches as well as the format of
567 the directory statistics file.
568
569
570 --numeric-owner
              On restore, always use the numeric uid/gid from the archive
              rather than the archived user/group names, which is the
              default behaviour.  Recommended when restoring from live
              CDs, which might have users with identical names but
              different uids/gids.
575
576
577 --do-not-restore-ownership
              Ignores the uid/gid from the archive and keeps the current
              user's.  Recommended for restoring data to a mounted
              filesystem which does not support Unix ownership, or when
              root privileges are not available.
582
583
584 --num-retries number
585 Number of retries to make on errors before giving up.
586
587
588 --old-filenames
589 Use the old filename format (incompatible with Windows/Samba)
590 rather than the new filename format.
591
592
593 --par2-options options
594 Verbatim options to pass to par2.
595
596
597 --par2-redundancy percent
598 Adjust the level of redundancy in percent for Par2 recovery
599 files (default 10%).
600
601
602 --progress
603 When selected, duplicity will output the current upload progress
              and estimated upload time.  To annotate changes, it will
              first perform a dry-run before a full or incremental backup,
              and then run the real operation, estimating the real upload
              progress.
607
608
609 --progress-rate number
610 Sets the update rate at which duplicity will output the upload
              progress messages (requires --progress option).  The default
              is to print the status every 3 seconds.
613
614
615 --rename <original path> <new path>
616 Treats the path orig in the backup as if it were the path new.
617 Can be passed multiple times. An example:
618
619 duplicity restore --rename Documents/metal Music/metal
620 sftp://uid@other.host/some_dir /home/me
621
622
623 --rsync-options options
624 Allows you to pass options to the rsync backend. The options
625 list should be of the form "opt1=parm1 opt2=parm2" where the
626 option string is quoted and the only spaces allowed are between
627 options. The option string will be passed verbatim to rsync,
628 after any internally generated option designating the remote
629 port to use. Here is a possibly useful example:
630
631 duplicity --rsync-options="--partial-dir=.rsync-partial"
632 /home/me rsync://uid@other.host/some_dir
633
634
635 --s3-european-buckets
636 When using the Amazon S3 backend, create buckets in Europe
637 instead of the default (requires --s3-use-new-style ). Also see
638 the EUROPEAN S3 BUCKETS section.
639
640 This option does not apply when using the newer boto3 backend,
641 which does not create buckets.
642
643 See also A NOTE ON AMAZON S3 below.
644
645
646 --s3-unencrypted-connection
647 Don't use SSL for connections to S3.
648
649 This may be much faster, at some cost to confidentiality.
650
651 With this option, anyone who can observe traffic between your
652 computer and S3 will be able to tell: that you are using
653 Duplicity, the name of the bucket, your AWS Access Key ID, the
654 increment dates and the amount of data in each increment.
655
656 This option affects only the connection, not the GPG encryption
657 of the backup increment files. Unless that is disabled, an
658 observer will not be able to see the file names or contents.
659
660 This option is not available when using the newer boto3 backend.
661
662 See also A NOTE ON AMAZON S3 below.
663
664
665 --s3-use-new-style
666 When operating on Amazon S3 buckets, use new-style subdomain
667 bucket addressing. This is now the preferred method to access
668 Amazon S3, but is not backwards compatible if your bucket name
669 contains upper-case characters or other characters that are not
670 valid in a hostname.
671
672 This option has no effect when using the newer boto3 backend,
673 which will always use new style subdomain bucket naming.
674
675 See also A NOTE ON AMAZON S3 below.
676
677
678 --s3-use-rrs
679 Store volumes using Reduced Redundancy Storage when uploading to
680 Amazon S3. This will lower the cost of storage but also lower
              the durability of stored volumes to 99.99% instead of the
682 99.999999999% durability offered by Standard Storage on S3.
683
684
685 --s3-use-ia
686 Store volumes using Standard - Infrequent Access when uploading
687 to Amazon S3. This storage class has a lower storage cost but a
688 higher per-request cost, and the storage cost is calculated
689 against a 30-day storage minimum. According to Amazon, this
690 storage is ideal for long-term file storage, backups, and
691 disaster recovery.
692
693
694 --s3-use-onezone-ia
695 Store volumes using One Zone - Infrequent Access when uploading
696 to Amazon S3. This storage is similar to Standard - Infrequent
697 Access, but only stores object data in one Availability Zone.
698
699
700 --s3-use-glacier
701 Store volumes using Glacier S3 when uploading to Amazon S3. This
702 storage class has a lower cost of storage but a higher per-
703 request cost along with delays of up to 12 hours from the time
704 of retrieval request. This storage cost is calculated against a
705 90-day storage minimum. According to Amazon this storage is
706 ideal for data archiving and long-term backup offering
707 99.999999999% durability. To restore a backup you will have to
708 manually migrate all data stored on AWS Glacier back to Standard
709 S3 and wait for AWS to complete the migration. Notice:
710 Duplicity will store the manifest.gpg files from full and
711 incremental backups on AWS S3 standard storage to allow quick
712 retrieval for later incremental backups, all other data is
713 stored in S3 Glacier.
714
715
716 --s3-use-deep-archive
717 Store volumes using Glacier Deep Archive S3 when uploading to
718 Amazon S3. This storage class has a lower cost of storage but a
719 higher per-request cost along with delays of up to 48 hours from
720 the time of retrieval request. This storage cost is calculated
721 against a 180-day storage minimum. According to Amazon this
722 storage is ideal for data archiving and long-term backup
723 offering 99.999999999% durability. To restore a backup you will
724 have to manually migrate all data stored on AWS Glacier Deep
725 Archive back to Standard S3 and wait for AWS to complete the
726 migration. Notice: Duplicity will store the manifest.gpg files
727 from full and incremental backups on AWS S3 standard storage to
728 allow quick retrieval for later incremental backups, all other
729 data is stored in S3 Glacier Deep Archive.
730
731 Glacier Deep Archive is only available when using the newer
732 boto3 backend.
733
734
735 --s3-use-multiprocessing
              Allow multipart volume uploads to S3 through multiprocessing.
737 This option requires Python 2.6 and can be used to make uploads
738 to S3 more efficient. If enabled, files duplicity uploads to S3
739 will be split into chunks and uploaded in parallel. Useful if
740 you want to saturate your bandwidth or if large files are
741 failing during upload.
742
              This has no effect when using the newer boto3 backend.
              Boto3 always uses multiprocessing when it believes it will
              be more efficient.
746
747 See also A NOTE ON AMAZON S3 below.
748
749
750 --s3-use-server-side-encryption
751 Allow use of server side encryption in S3
752
753
754 --s3-multipart-chunk-size
755 Chunk size (in MB) used for S3 multipart uploads. Make this
756 smaller than --volsize to maximize the use of your bandwidth.
757 For example, a chunk size of 10MB with a volsize of 30MB will
758 result in 3 chunks per volume upload.
759
760 This has no effect when using the newer boto3 backend.
761
762 See also A NOTE ON AMAZON S3 below.
763
764
765 --s3-multipart-max-procs
766 Specify the maximum number of processes to spawn when performing
767 a multipart upload to S3. By default, this will choose the
768 number of processors detected on your system (e.g. 4 for a
769 4-core system). You can adjust this number as required to ensure
770 you don't overload your system while maximizing the use of your
771 bandwidth.
772
773 This has no effect when using the newer boto3 backend.
774
775 See also A NOTE ON AMAZON S3 below.
776
777
778 --s3-multipart-max-timeout
779 You can control the maximum time (in seconds) a multipart upload
780 can spend on uploading a single chunk to S3. This may be useful
781 if you find your system hanging on multipart uploads or if you'd
782 like to control the time variance when uploading to S3 to ensure
783 you kill connections to slow S3 endpoints.
784
785 This has no effect when using the newer boto3 backend.
786
787 See also A NOTE ON AMAZON S3 below.
788
789
790 --azure-blob-tier
791 Standard storage tier used for backup files (Hot|Cool|Archive).
792
793
794 --azure-max-single-put-size
              Specify the largest supported upload size (in bytes) for
              which the Azure library makes only one put call.  If the
              content size is known and below this value, the Azure
              library will perform only one put request to upload one
              block.
800
801
802 --azure-max-block-size
              Specify the block size (in bytes) used by the Azure library
              to upload blobs that are split into multiple blocks.  The
              maximum block size the service supports is 104857600
              (100MiB) and the default is 4194304 (4MiB).
807
808
809 --azure-max-connections
              Specify the maximum number of connections used to transfer a
              single blob to Azure when the blob size exceeds 64MB.  The
              default value is 2.
812
813
814 --scp-command command
815 (only ssh pexpect backend with --use-scp enabled) The command
816 will be used instead of "scp" to send or receive files. To list
817 and delete existing files, the sftp command is used.
818 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
819
820
821 --sftp-command command
822 (only ssh pexpect backend) The command will be used instead of
823 "sftp".
824 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
825
826
827 --short-filenames
828 If this option is specified, the names of the files duplicity
829 writes will be shorter (about 30 chars) but less understandable.
830 This may be useful when backing up to MacOS or another OS or FS
831 that doesn't support long filenames.
832
833
834 --sign-key key-id
835 This option can be used when backing up, restoring or verifying.
              When backing up, all backup files will be signed with the
              given key-id.
837 When restoring, duplicity will signal an error if any remote
838 file is not signed with the given key-id. The key-id can be
839 given in any of the formats supported by GnuPG; see gpg(1),
840 section "HOW TO SPECIFY A USER ID" for details. Should be
841 specified only once because currently only one signing key is
842 supported. Last entry overrides all other entries.
843 See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
844
845
846 --ssh-askpass
847 Tells the ssh backend to prompt the user for the remote system
848 password, if it was not defined in target url and no
849 FTP_PASSWORD env var is set. This password is also used for
850 passphrase-protected ssh keys.
851
852
853 --ssh-options options
854 Allows you to pass options to the ssh backend. Can be specified
855 multiple times or as a space separated options list. The
856 options list should be of the form "-oOpt1='parm1'
857 -oOpt2='parm2'" where the option string is quoted and the only
858 spaces allowed are between options. The option string will be
              passed verbatim to both scp and sftp, whose command line
              syntax differs slightly; the options should therefore be
              given in the long option format described in ssh_config(5).
862
863 example of a list:
864
865 duplicity --ssh-options="-oProtocol=2
866 -oIdentityFile='/my/backup/id'" /home/me
867 scp://user@host/some_dir
868
869 example with multiple parameters:
870
871 duplicity --ssh-options="-oProtocol=2" --ssh-
872 options="-oIdentityFile='/my/backup/id'" /home/me
873 scp://user@host/some_dir
874
875 NOTE: The ssh paramiko backend currently supports only the -i or
876 -oIdentityFile setting. If needed provide more host specific
877 options via ssh_config file.
878
879
880 --ssl-cacert-file file
881 (only webdav & lftp backend) Provide a cacert file for ssl
882 certificate verification.
883 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
884
885
886 --ssl-cacert-path path/to/certs/
887 (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
888 recent lftp) Provide a path to a folder containing cacert files
889 for ssl certificate verification.
890 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
891
892
893 --ssl-no-check-certificate
894 (only webdav & lftp backend) Disable ssl certificate
895 verification.
896 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
897
898
899 --swift-storage-policy
900 Use this storage policy when operating on Swift containers.
901 See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.
902
903
904 --metadata-sync-mode mode
              This option defaults to 'partial', but you can set it to 'full'.
906 Use 'partial' to avoid syncing metadata for backup chains that
907 you are not going to use. This saves time when restoring for
908 the first time, and lets you restore an old backup that was
909 encrypted with a different passphrase by supplying only the
910 target passphrase.
911 Use 'full' to sync metadata for all backup chains on the remote.
912
913
914 --tempdir directory
915 Use this existing directory for duplicity temporary files
916 instead of the system default, which is usually the /tmp
917 directory. This option supersedes any environment variable.
918 See also ENVIRONMENT VARIABLES.
919
920
921 -ttime, --time time, --restore-time time
922 Specify the time from which to restore or list files.
923
924
925 --time-separator char
926 Use char as the time separator in filenames instead of colon
927 (":").
928
929
930 --timeout seconds
931 Use seconds as the socket timeout value if duplicity begins to
932 timeout during network operations. The default is 30 seconds.
933
934
935 --use-agent
936 If this option is specified, then --use-agent is passed to the
937 GnuPG encryption process and it will try to connect to gpg-agent
938 before it asks for a passphrase for --encrypt-key or --sign-key
939 if needed.
940 Note: Contrary to previous versions of duplicity, this option
941 will also be honored by GnuPG 2 and newer versions. If GnuPG 2
942 is in use, duplicity passes the option --pinentry-mode=loopback
              to the gpg process unless --use-agent is specified on the
944 duplicity command line. This has the effect that GnuPG 2 uses
945 the agent only if --use-agent is given, just like GnuPG 1.
946
947
948 --verbosity level, -vlevel
949 Specify output verbosity level (log level). Named levels and
950 corresponding values are 0 Error, 2 Warning, 4 Notice (default),
951 8 Info, 9 Debug (noisiest).
952 level may also be
953 a character: e, w, n, i, d
954 a word: error, warning, notice, info, debug
955
956 The options -v4, -vn and -vnotice are functionally equivalent,
957 as are the mixed/upper-case versions -vN, -vNotice and -vNOTICE.
958
959
960 --version
961 Print duplicity's version and quit.
962
963
964 --volsize number
965 Change the volume size to number MB. Default is 200MB.
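
              For example, to use 100MB volumes (illustrative host):

                duplicity --volsize 100 /home/me
                sftp://uid@other.host/some_dir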
966

ENVIRONMENT VARIABLES
969 TMPDIR, TEMP, TMP
              In decreasing order of importance, specifies the directory
              to use for temporary files (inherited from Python's tempfile
              module).  The option --tempdir, if given, supersedes any of
              these.
974
975 FTP_PASSWORD
976 Supported by most backends which are password capable. More
977 secure than setting it in the backend url (which might be
              readable in the operating system's process listing to other users
979 on the same machine).
980
981 PASSPHRASE
982 This passphrase is passed to GnuPG. If this is not set, the user
983 will be prompted for the passphrase.
984
985 SIGN_PASSPHRASE
              The passphrase to be used for --sign-key.  If omitted and
              the sign key is also one of the keys to encrypt against,
              PASSPHRASE will be reused instead.  Otherwise, if a
              passphrase is needed but not set, the user will be prompted
              for it.
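
       For example, a sketch supplying both passphrases non-interactively
       (key ids and passphrases are hypothetical):

         PASSPHRASE=encpass SIGN_PASSPHRASE=signpass duplicity
         --encrypt-key 0x12345678 --sign-key 0x87654321 /home/me
         sftp://uid@other.host/some_dir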
990
991 Other environment variables may be used to configure specific
992 backends. See the notes for the particular backend.
993

URL FORMAT
996 Duplicity uses the URL format (as standard as possible) to define data
997 locations. The generic format for a URL is:
998
999 scheme://[user[:password]@]host[:port]/[/]path
1000
       It is not recommended to expose the password on the command line,
       since it could be revealed to anyone with permission to do process
       listings; it is permitted, however.  Consider setting the
       environment variable FTP_PASSWORD instead, which is used by most,
       if not all, backends, regardless of its name.
1006
1007 In protocols that support it, the path may be preceded by a single
1008 slash, '/path', to represent a relative path to the target home
1009 directory, or preceded by a double slash, '//path', to represent an
1010 absolute filesystem path.
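
       For example, with the sftp backend (illustrative host), the first
       URL addresses some_dir relative to the user's home directory, the
       second an absolute path:

         sftp://user@other.host/some_dir
         sftp://user@other.host//var/backup/some_dir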
1011
1012 Note:
1013 Scheme (protocol) access may be provided by more than one
1014 backend. In case the default backend is buggy or simply not
1015 working in a specific case it might be worth trying an
1016 alternative implementation. Alternative backends can be
1017 selected by prefixing the scheme with the name of the
1018 alternative backend e.g. ncftp+ftp:// and are mentioned below
1019 the scheme's syntax summary.
1020
1021
1022 Formats of each of the URL schemes follow:
1023
1024
1025 Amazon Drive Backend
1026
1027 ad://some_dir
1028
1029 See also A NOTE ON AMAZON DRIVE
1030
1031 Azure
1032
1033 azure://container-name
1034
1035 See also A NOTE ON AZURE ACCESS
1036
1037 B2
1038
1039 b2://account_id[:application_key]@bucket_name/[folder/]
1040
1041 Box
1042
1043 box:///some_dir[?config=path_to_config]
1044
1045 See also A NOTE ON BOX ACCESS
1046
1047 Cloud Files (Rackspace)
1048
1049 cf+http://container_name
1050
1051 See also A NOTE ON CLOUD FILES ACCESS
1052
1053 Dropbox
1054
1055 dpbx:///some_dir
1056
1057 Make sure to read A NOTE ON DROPBOX ACCESS first!
1058
1059 Local file path
1060
1061 file://[relative|/absolute]/local/path
1062
1063 FISH (Files transferred over Shell protocol) over ssh
1064
1065 fish://user[:password]@other.host[:port]/[relative|/absolute]_path
1066
1067 FTP
1068
1069 ftp[s]://user[:password]@other.host[:port]/some_dir
1070
1071 NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
1072 default is lftp+ftp://...
1073
1074 Google Docs
1075
1076 gdocs://user[:password]@other.host/some_dir
1077
1078 NOTE: use pydrive+, gdata+ prefixes to enforce a specific
1079 backend, default is pydrive+gdocs://...
1080
1081 Google Cloud Storage
1082
1083 gs://bucket[/prefix]
1084
1085 HSI
1086
1087 hsi://user[:password]@other.host/some_dir
1088
1089 hubiC
1090
1091 cf+hubic://container_name
1092
1093 See also A NOTE ON HUBIC
1094
1095 IMAP email storage
1096
1097 imap[s]://user[:password]@host.com[/from_address_prefix]
1098
1099 See also A NOTE ON IMAP
1100
1101 MEGA.nz cloud storage (only works for accounts created prior to
1102 November 2018, uses "megatools")
1103
1104 mega://user[:password]@mega.nz/some_dir
1105
1106 NOTE: if not given in the URL, relies on password being stored
1107 within $HOME/.megarc (as used by the "megatools" utilities)
1108
1109 MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd"
1110 tools)
1111
1112 megav2://user[:password]@mega.nz/some_dir
1113 megav3://user[:password]@mega.nz/some_dir[?no_logout=1] (For
1114 latest MEGAcmd)
1115
       NOTE: although "MEGAcmd" no longer uses a configuration file, for
       convenience this backend searches for the user password in the
       $HOME/.megav2rc file (same syntax as the old $HOME/.megarc)
1120 [Login]
1121 Username = MEGA_USERNAME
1122 Password = MEGA_PASSWORD
1123
1124 OneDrive Backend
1125
1126 onedrive://some_dir
1127
1128 Par2 Wrapper Backend
1129
1130 par2+scheme://[user[:password]@]host[:port]/[/]path
1131
1132 See also A NOTE ON PAR2 WRAPPER BACKEND
1133
1134 Rclone Backend
1135
1136 rclone://remote:/some_dir
1137
1138 See also A NOTE ON RCLONE BACKEND
1139
1140 Rsync via daemon
1141
1142 rsync://user[:password]@host.com[:port]::[/]module/some_dir
1143
1144 Rsync over ssh (only key auth)
1145
1146 rsync://user@host.com[:port]/[relative|/absolute]_path
1147
1148 S3 storage (Amazon)
1149
1150 s3://host[:port]/bucket_name[/prefix]
1151 s3+http://bucket_name[/prefix]
1152 defaults to the legacy boto backend based on boto v2 (last
1153 update 2018/07)
1154 alternatively try the newer boto3+s3://bucket_name[/prefix]
1155
1156 For details see A NOTE ON AMAZON S3 and see also A NOTE ON
1157 EUROPEAN S3 BUCKETS below.
1158
1159 SCP/SFTP access
1160
1161 scp://.. or
1162 sftp://user[:password]@other.host[:port]/[relative|/absolute]_path
1163
1164 defaults are paramiko+scp:// and paramiko+sftp://
1165 alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
1166 See also --ssh-askpass, --ssh-options and A NOTE ON SSH
1167 BACKENDS.
1168
1169 Swift (Openstack)
1170
1171 swift://container_name[/prefix]
1172
1173 See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
1174
1175 Public Cloud Archive (OVH)
1176
1177 pca://container_name[/prefix]
1178
1179 See also A NOTE ON PCA ACCESS
1180
1181 Tahoe-LAFS
1182
1183 tahoe://alias/directory
1184
1185 WebDAV
1186
1187 webdav[s]://user[:password]@other.host[:port]/some_dir
1188
1189 alternatively try lftp+webdav[s]://
1190
1191 pydrive
1192
          pydrive://<service account's email
          address>@developer.gserviceaccount.com/some_dir
1195
1196 See also A NOTE ON PYDRIVE BACKEND below.
1197
1198 gdrive
1199
          gdrive://<service account's email
          address>@developer.gserviceaccount.com/some_dir
1202
1203 See also A NOTE ON GDRIVE BACKEND below.
1204
1205 multi
1206
1207 multi:///path/to/config.json
1208
1209 See also A NOTE ON MULTI BACKEND below.
1210
1211 MediaFire
1212
1213 mf://user[:password]@mediafire.com/some_dir
1214
1215 See also A NOTE ON MEDIAFIRE BACKEND below.
1216

TIME FORMATS
1219 duplicity uses time strings in two places. Firstly, many of the files
1220 duplicity creates will have the time in their filenames in the w3
1221 datetime format as described in a w3 note at http://www.w3.org/TR/NOTE-
1222 datetime. Basically they look like "2001-07-15T04:09:38-07:00", which
1223 means what it looks like. The "-07:00" section means the time zone is
1224 7 hours behind UTC.
1225
1226 Secondly, the -t, --time, and --restore-time options take a time
1227 string, which can be given in any of several formats:
1228
1229 1. the string "now" (refers to the current time)
1230
       2. a sequence of digits, like "123456890" (indicating the time in
1232 seconds after the epoch)
1233
1234 3. A string like "2002-01-25T07:00:00+02:00" in datetime format
1235
1236 4. An interval, which is a number followed by one of the characters
1237 s, m, h, D, W, M, or Y (indicating seconds, minutes, hours,
1238 days, weeks, months, or years respectively), or a series of such
1239 pairs. In this case the string refers to the time that preceded
1240 the current time by the length of the interval. For instance,
1241 "1h78m" indicates the time that was one hour and 78 minutes ago.
1242 The calendar here is unsophisticated: a month is always 30 days,
1243 a year is always 365 days, and a day is always 86400 seconds.
1244
1245 5. A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY, or
1246 MM-DD-YYYY, which indicates midnight on the day in question,
1247 relative to the current time zone settings. For instance,
1248 "2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
1249 2002.
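
       For example, both of the following are valid (illustrative host);
       the first uses an interval, the second a date:

         duplicity remove-older-than 6M --force
         sftp://uid@other.host/some_dir
         duplicity list-current-files --time 2002/3/5
         sftp://uid@other.host/some_dir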
1250

FILE SELECTION
1253 When duplicity is run, it searches through the given source directory
1254 and backs up all the files specified by the file selection system. The
1255 file selection system comprises a number of file selection conditions,
1256 which are set using one of the following command line options:
1257 --exclude
1258 --exclude-device-files
1259 --exclude-if-present
1260 --exclude-filelist
1261 --exclude-regexp
1262 --include
1263 --include-filelist
1264 --include-regexp
1265 Each file selection condition either matches or doesn't match a given
1266 file. A given file is excluded by the file selection system exactly
1267 when the first matching file selection condition specifies that the
1268 file be excluded; otherwise the file is included.
1269
1270 For instance,
1271
1272 duplicity --include /usr --exclude /usr /usr
1273 scp://user@host/backup
1274
1275 is exactly the same as
1276
1277 duplicity /usr scp://user@host/backup
1278
1279 because the include and exclude directives match exactly the same
1280 files, and the --include comes first, giving it precedence. Similarly,
1281
1282 duplicity --include /usr/local/bin --exclude /usr/local /usr
1283 scp://user@host/backup
1284
       would back up the /usr/local/bin directory (and its contents), but not
1286 /usr/local/doc.
1287
1288 The include, exclude, include-filelist, and exclude-filelist options
1289 accept some extended shell globbing patterns. These patterns can
1290 contain *, **, ?, and [...] (character ranges). As in a normal shell,
1291 * can be expanded to any string of characters not containing "/", ?
1292 expands to any character except "/", and [...] expands to a single
1293 character of those characters specified (ranges are acceptable). The
1294 new special pattern, **, expands to any string of characters whether or
1295 not it contains "/". Furthermore, if the pattern starts with
1296 "ignorecase:" (case insensitive), then this prefix will be removed and
1297 any character in the string can be replaced with an upper- or lowercase
1298 version of itself.
1299
1300 Remember that you may need to quote these characters when typing them
1301 into a shell, so the shell does not interpret the globbing patterns
1302 before duplicity sees them.
1303
1304 The --exclude pattern option matches a file if:
1305
1306 1. pattern can be expanded into the file's filename, or
1307 2. the file is inside a directory matched by the option.
1308
1309 Conversely, the --include pattern matches a file if:
1310
1311 1. pattern can be expanded into the file's filename, or
1312 2. the file is inside a directory matched by the option, or
1313 3. the file is a directory which contains a file matched by the
1314 option.
1315
1316 For example,
1317
1318 --exclude /usr/local
1319
1320 matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
1321 It is the same as --exclude /usr/local --exclude '/usr/local/**'.
1322
1323 On the other hand
1324
1325 --include /usr/local
1326
1327 specifies that /usr, /usr/local, /usr/local/lib, and
1328 /usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus you
1329 don't have to worry about including parent directories to make sure
1330 that included subdirectories have somewhere to go.
1331
1332 Finally,
1333
1334 --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'
1335
1336 would match a file like /usR/5fOO/hello/there/world.py. If it did
1337 match anything, it would also match /usr. If there is no existing file
1338 that the given pattern can be expanded into, the option will not match
1339 /usr alone.
1340
1341 The --include-filelist, and --exclude-filelist, options also introduce
1342 file selection conditions. They direct duplicity to read in a text
1343 file (either ASCII or UTF-8), each line of which is a file
1344 specification, and to include or exclude the matching files. Lines are
1345 separated by newlines or nulls, depending on whether the --null-
1346 separator switch was given. Each line in the filelist will be
1347 interpreted as a globbing pattern the way --include and --exclude
1348 options are interpreted, except that lines starting with "+ " are
1349 interpreted as include directives, even if found in a filelist
1350 referenced by --exclude-filelist. Similarly, lines starting with "- "
1351 exclude files even if they are found within an include filelist.
1352
1353 For example, if file "list.txt" contains the lines:
1354
1355 /usr/local
1356 - /usr/local/doc
1357 /usr/local/bin
1358 + /var
1359 - /var
1360
1361 then --include-filelist list.txt would include /usr, /usr/local, and
1362 /usr/local/bin. It would exclude /usr/local/doc,
1363 /usr/local/doc/python, etc. It would also include /usr/local/man, as
       this is included within /usr/local.  Finally, it is undefined what
1365 happens with /var. A single file list should not contain conflicting
1366 file specifications.
1367
1368 Each line in the filelist will also be interpreted as a globbing
1369 pattern the way --include and --exclude options are interpreted. For
1370 instance, if the file "list.txt" contains the lines:
1371
1372 dir/foo
1373 + dir/bar
1374 - **
1375
1376 Then --include-filelist list.txt would be exactly the same as
1377 specifying --include dir/foo --include dir/bar --exclude ** on the
1378 command line.
1379
1380 Finally, the --include-regexp and --exclude-regexp options allow files
1381 to be included and excluded if their filenames match a python regular
1382 expression. Regular expression syntax is too complicated to explain
1383 here, but is covered in Python's library reference. Unlike the
1384 --include and --exclude options, the regular expression options don't
1385 match files containing or contained in matched files. So for instance
1386
1387 --include '[0-9]{7}(?!foo)'
1388
1389 matches any files whose full pathnames contain 7 consecutive digits
1390 which aren't followed by 'foo'. However, it wouldn't match /home even
1391 if /home/ben/1234567 existed.
1392

A NOTE ON AMAZON DRIVE
1395 1. The API Keys used for Amazon Drive have not been granted
1396 production limits. Amazon do not say what the development
          limits are and are not replying to requests to whitelist
1398 duplicity. A related tool, acd_cli, was demoted to development
1399 limits, but continues to work fine except for cases of excessive
1400 usage. If you experience throttling and similar issues with
1401 Amazon Drive using this backend, please report them to the
1402 mailing list.
1403
1404 2. If you previously used the acd+acdcli backend, it is strongly
1405 recommended to update to the ad backend instead, since it
1406 interfaces directly with Amazon Drive. You will need to setup
1407 the OAuth once again, but can otherwise keep your backups and
1408 config.
1409
1410
1412 When backing up to Amazon S3, two backend implementations are
1413 available. The schemes "s3" and "s3+http" are implemented using the
1414 older boto library, which has been deprecated and is no longer
1415 supported. The "boto3+s3" scheme is based on the newer boto3 library.
1416 This new backend fixes several known limitations in the older backend,
1417 which have crept in as Amazon S3 has evolved while the deprecated boto
1418 library has not kept up.
1419
1420 The boto3 backend should behave largely the same as the older S3
1421 backend, but there are some differences in the handling of some of the
1422 "S3" options. Additionally, there are some compatibility differences
       with the new backend.  For these reasons, both backends have been
1424 retained for the time being. See the documentation for specific
1425 options regarding differences related to each backend.
1426
1427 The boto3 backend does not support bucket creation. This is a
1428 deliberate choice which simplifies the code, and side steps problems
1429 related to region selection. Additionally, it is probably not a good
1430 practice to give your backup role bucket creation rights. In most
1431 cases the role used for backups should probably be limited to specific
1432 buckets.
1433
1434 The boto3 backend only supports newer domain style buckets. Amazon is
1435 moving to deprecate the older bucket style, so migration is
1436 recommended. Use the older s3 backend for compatibility with backups
1437 stored in buckets using older naming conventions.
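
       A minimal boto3 invocation might look like this (bucket name is
       illustrative; credentials are taken from the usual AWS environment
       variables or configuration):

         AWS_ACCESS_KEY_ID=<id> AWS_SECRET_ACCESS_KEY=<key> duplicity
         /home/me boto3+s3://my-bucket/prefix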
1438
1439 The boto3 backend does not currently support initiating restores from
1440 the glacier storage class. When restoring a backup from glacier or
1441 glacier deep archive, the backup files must first be restored out of
1442 band. There are multiple options when restoring backups from cold
1443 storage, which vary in both cost and speed. See Amazon's documentation
1444 for details.
1445

A NOTE ON AZURE ACCESS
1448 The Azure backend requires the Microsoft Azure Storage Blobs client
1449 library for Python to be installed on the system. See REQUIREMENTS.
1450
1451 It uses the environment variable AZURE_CONNECTION_STRING (required).
1452 This string contains all necessary information such as Storage Account
1453 name and the key for authentication. You can find it under Access Keys
1454 for the storage account.
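
       For example (container name illustrative; the connection string
       comes from the storage account's Access Keys):

         AZURE_CONNECTION_STRING='<connection string>' duplicity /home/me
         azure://duplicity-backups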
1455
       Duplicity will take care of creating the container when performing
       the backup.  Do not create it manually beforehand.
1458
1459 A container name (as given as the backup url) must be a valid DNS name,
1460 conforming to the following naming rules:
1461
1462
1463 1. Container names must start with a letter or number, and
1464 can contain only letters, numbers, and the dash (-)
1465 character.
1466
1467 2. Every dash (-) character must be immediately preceded and
1468 followed by a letter or number; consecutive dashes are
1469 not permitted in container names.
1470
1471 3. All letters in a container name must be lowercase.
1472
1473 4. Container names must be from 3 through 63 characters
1474 long.
1475
1476 These rules come from Azure; see https://docs.microsoft.com/en-
1477 us/rest/api/storageservices/naming-and-referencing-
1478 containers--blobs--and-metadata
1479

A NOTE ON BOX ACCESS
1482 The box backend requires boxsdk with jwt support to be installed on the
1483 system. See REQUIREMENTS.
1484
       It uses the environment variable BOX_CONFIG_PATH (optional).  This
       string contains the path to the box custom app's config.json.
       Either this environment variable or the config query parameter in
       the url needs to be specified; if both are specified, the query
       parameter takes precedence.
1489
1490
1491 Create a Box custom app
       In order to use the box backend, the user needs to create a box
       custom app in the box developer console
       (https://app.box.com/developers/console).

       After creating a new custom app, please make sure it is configured
       as follows:
1497
1498
1499 1. Choose "App Access Only" for "App Access Level"
1500
1501 2. Check "Write all files and folders stored in Box"
1502
1503 3. Generate a Public/Private Keypair
1504
       The user also needs to grant the created custom app permission in
       the admin console (https://app.box.com/master/custom-apps) by
       clicking the "+" button and entering the client_id, which can be
       found on the custom app's configuration page.
1509

A NOTE ON CLOUD FILES ACCESS
1512 Pyrax is Rackspace's next-generation Cloud management API, including
1513 Cloud Files access. The cfpyrax backend requires the pyrax library to
1514 be installed on the system. See REQUIREMENTS.
1515
1516 Cloudfiles is Rackspace's now deprecated implementation of OpenStack
1517 Object Storage protocol. Users wishing to use Duplicity with Rackspace
1518 Cloud Files should migrate to the new Pyrax plugin to ensure support.
1519
1520 The backend requires python-cloudfiles to be installed on the system.
1521 See REQUIREMENTS.
1522
1523 It uses three environment variables for authentication:
1524 CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
1525 CLOUDFILES_AUTHURL (optional)
1526
1527 If CLOUDFILES_AUTHURL is unspecified it will default to the value
1528 provided by python-cloudfiles, which points to Rackspace; hence this
1529 value must be set in order to use other Cloud Files providers.
1530
1531
1532A NOTE ON DROPBOX ACCESS
1533 1. First of all, the Dropbox backend requires a valid authentication
1534 token. It should be passed via the DPBX_ACCESS_TOKEN environment
1535 variable.
1536 To obtain it, please create a 'Dropbox API' application at:
1537 https://www.dropbox.com/developers/apps/create
1538 Then visit the app settings and just use the 'Generated access
1539 token' under the OAuth2 section.
1540 Alternatively you can let duplicity generate the access token
1541 itself. In that case, temporarily export DPBX_APP_KEY and
1542 DPBX_APP_SECRET using values from the app settings page and run
1543 duplicity interactively.
1544 It will print the URL that you need to open in the browser to
1545 obtain an OAuth2 token for the application. Just follow the
1546 on-screen instructions and then put the generated token into the
1547 DPBX_ACCESS_TOKEN variable. Once done, feel free to unset
1548 DPBX_APP_KEY and DPBX_APP_SECRET.
1549
1550
1551 2. "some_dir" must already exist in the Dropbox folder. Depending
1552 on access token kind it may be:
1553 Full Dropbox: path is absolute and starts from 'Dropbox'
1554 root folder.
1555 App Folder: path is related to application folder.
1556 Dropbox client will show it in ~/Dropbox/Apps/<app-name>
1557
1558
1559 3. When using Dropbox for storage, be aware that all files,
1560 including the ones in the Apps folder, will be synced to all
1561 connected computers. You may prefer to use a separate Dropbox
1562 account dedicated to the backups, and not connect any computers
1563 to that account. Alternatively you can configure selective sync
1564 on all computers to avoid syncing of backup files.
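
       A minimal example, assuming the token was obtained as described in
       item 1 and "some_dir" exists as required by item 2:

           DPBX_ACCESS_TOKEN=<access token> \
           duplicity /home/me dpbx:///some_dir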
1565
1566
1567A NOTE ON EUROPEAN S3 BUCKETS
1568 Amazon S3 provides the ability to choose the location of a bucket upon
1569 its creation. The purpose is to enable the user to choose a location
1570 that is topologically closer on the network, which may allow for
1571 faster data transfers.
1572
1573 duplicity will create a new bucket the first time a bucket access is
1574 attempted. At this point, the bucket will be created in Europe if
1575 --s3-european-buckets was given. For reasons having to do with how the
1576 Amazon S3 service works, this also requires the use of the --s3-use-
1577 new-style option. This option turns on subdomain based bucket
1578 addressing in S3. The details are beyond the scope of this man page,
1579 but it is important to know that your bucket must not contain upper
1580 case letters or any other characters that are not valid parts of a
1581 hostname. Consequently, for reasons of backwards compatibility, use of
1582 subdomain based bucket addressing is not enabled by default.
1583
1584 Note that you will need to use --s3-use-new-style for all operations on
1585 European buckets; not just upon initial creation.
1586
1587 You only need to use --s3-european-buckets upon initial creation, but
1588 you may use it at all times for consistency.
1589
1590 Further note that when creating a new European bucket, it can take a
1591 while before the bucket is fully accessible. At the time of this
1592 writing it is unclear to what extent this is an expected feature of
1593 Amazon S3, but in practice you may experience timeouts, socket errors
1594 or HTTP errors when trying to upload files to your newly created
1595 bucket. Give it a few minutes and the bucket should function normally.
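
       For example, creating and backing up to a European bucket might look
       like this (the bucket name is a placeholder and must be valid as
       part of a hostname):

           duplicity --s3-use-new-style --s3-european-buckets \
           /home/me s3+http://my-eu-bucket/some_dir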
1596
1597
1598A NOTE ON FILENAME PREFIXES
1599 Filename prefixes can be used in multi backend with mirror mode to
1600 define affinity rules. They can also be used in conjunction with S3
1601 lifecycle rules to transition archive files to Glacier, while keeping
1602 metadata (signature and manifest files) on S3.
1603
1604 Duplicity does not require access to archive files except when
1605 restoring from backup.
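
       For example, to let an S3 lifecycle rule transition only the bulky
       archive volumes to Glacier while keeping metadata directly
       accessible, distinct prefixes might be assigned like this (the
       prefix values are illustrative):

           duplicity --file-prefix-archive archive_ \
           --file-prefix-manifest manifest_ \
           --file-prefix-signature signature_ \
           /home/me s3+http://mybucket/some_dir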
1606
1607
1608A NOTE ON GOOGLE CLOUD STORAGE
1609 Support for Google Cloud Storage relies on its Interoperable Access,
1610 which must be enabled for your account. Once enabled, you can generate
1611 Interoperable Storage Access Keys and pass them to duplicity via the
1612 GS_ACCESS_KEY_ID and GS_SECRET_ACCESS_KEY environment variables.
1613 Alternatively, you can run gsutil config -a to have the Google Cloud
1614 Storage utility populate the ~/.boto configuration file.
1615
1616 Enable Interoperable Access:
1617 https://code.google.com/apis/console#:storage
1618 Create Access Keys:
1619 https://code.google.com/apis/console#:storage:legacy
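
       With the keys generated as described above, a backup might look like
       this (the bucket name is a placeholder):

           GS_ACCESS_KEY_ID=<access key id> \
           GS_SECRET_ACCESS_KEY=<secret access key> \
           duplicity /home/me gs://mybucket/some_dir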
1620
1621
1622A NOTE ON HUBIC
1623 The hubic backend requires the pyrax library to be installed on the
1624 system. See REQUIREMENTS. You will need to set your credentials for
1625 hubiC in a file called ~/.hubic_credentials, following this pattern:
1626
1627 [hubic]
1628 email = your_email
1629 password = your_password
1630 client_id = api_client_id
1631 client_secret = api_secret_key
1632 redirect_uri = http://localhost/
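
       With the credentials file in place, a backup might then be run
       against a container (the container name is a placeholder):

           duplicity /home/me cf+hubic://my_container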
1633
1634
1635A NOTE ON IMAP
1636 An IMAP account can be used as a target for the upload. The userid may
1637 be specified and the password will be requested.
1638
1639 The from_address_prefix may be specified (and probably should be). The
1640 text will be used as the "From" address on the IMAP server. Then on a
1641 restore (or list) command the from_address_prefix will distinguish
1642 between different backups.
1643
1644
1645A NOTE ON MULTI BACKEND
1646 The multi backend allows duplicity to combine the storage available in
1647 more than one backend store (e.g., you can store across a google drive
1648 account and a onedrive account to get effectively the combined storage
1649 available in both). The URL path specifies a JSON-formatted config file
1650 containing a list of the backends it will use. The URL may also specify
1651 "query" parameters to configure overall behavior. Each element of the
1652 list must have a "url" element, and may also contain an optional
1653 "description" and an optional "env" list of environment variables used
1654 to configure that backend.
1655
1656 Query Parameters
1657 Query parameters come after the file URL in standard HTTP format, for
1658 example:
1659 multi:///path/to/config.json?mode=mirror&onfail=abort
1660 multi:///path/to/config.json?mode=stripe&onfail=continue
1661 multi:///path/to/config.json?onfail=abort&mode=stripe
1662 multi:///path/to/config.json?onfail=abort
1663 Order does not matter, however unrecognized parameters are considered
1664 an error.
1665
1666 mode=stripe
1667 This mode (the default) performs round-robin access to the list
1668 of backends. In this mode, all backends must be reliable as a
1669 loss of one means a loss of one of the archive files.
1670
1671 mode=mirror
1672 This mode accesses backends as a RAID1-store, storing every file
1673 in every backend and reading files from the first-successful
1674 backend. A loss of any backend should result in no failure.
1675 Note that backends added later will only get new files and may
1676 require a manual sync with one of the other operating ones.
1677
1678 onfail=continue
1679 This setting (the default) continues all write operations on a
1680 best-effort basis: on failure, the next backend is tried. An
1681 error is reported only when all backends fail a given operation,
1682 with the error result taken from the last failure.
1683
1684 onfail=abort
1685 This setting considers any backend write failure as a
1686 terminating condition and reports the error. Data reading and
1687 listing operations are independent of this and will try with the
1688 next backend on failure.
1689
1690 JSON File Example
1691 [
1692 {
1693 "description": "a comment about the backend"
1694 "url": "abackend://myuser@domain.com/backup",
1695 "env": [
1696 {
1697 "name" : "MYENV",
1698 "value" : "xyz"
1699 },
1700 {
1701 "name" : "FOO",
1702 "value" : "bar"
1703 }
1704 ],
1705 "prefixes": ["prefix1_", "prefix2_"]
1706 },
1707 {
1708 "url": "file:///path/to/dir"
1709 }
1710 ]
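
       Given such a config file, a mirrored backup could then be started
       like this (quote the URL so the shell does not interpret the &):

           duplicity /home/me "multi:///path/to/config.json?mode=mirror&onfail=abort"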
1711
1712
1713A NOTE ON PAR2 WRAPPER BACKEND
1714 The Par2 Wrapper Backend can be used in combination with all other
1715 backends to create recovery files. Just add par2+ before a regular
1716 scheme (e.g. par2+ftp://user@host/dir or par2+s3+http://bucket_name ).
1717 This will create par2 recovery files for each archive and upload them
1718 all to the wrapped backend.
1719
1720 Before restoring, archives will be verified. Corrupt archives will be
1721 repaired on the fly if there are enough recovery blocks available.
1722
1723 Use --par2-redundancy <percent> to adjust the size (and redundancy) of
1724 the recovery files.
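
       For example, a backup with 10% redundancy over sftp might look like:

           duplicity --par2-redundancy 10 /home/me par2+sftp://uid@other.host/some_dir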
1725
1726
1727A NOTE ON PYDRIVE BACKEND
1728 The pydrive backend requires the Python PyDrive package to be installed
1729 on the system. See REQUIREMENTS.
1730
1731 There are two ways to use PyDrive: with a regular account or with a
1732 "service account". With a service account, a separate account is
1733 created that is only accessible via Google APIs, not a web login.
1734 With a regular account, you can store backups in your normal Google
1735 Drive.
1736
1737 To use a service account, go to the Google developers console at
1738 https://console.developers.google.com. Create a project, and make sure
1739 Drive API is enabled for the project. Under "APIs and auth", click
1740 Create New Client ID, then select Service Account with P12 key.
1741
1742 Download the .p12 key file of the account and convert it to the .pem
1743 format:
1744 openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem
1745
1746 The content of the .pem file should be passed in the
1747 GOOGLE_DRIVE_ACCOUNT_KEY environment variable for authentication.
1748
1749 The email address of the account will be used as part of the URL. See
1750 URL FORMAT above.
1751
1752 The alternative is to use a regular account. To do this, start as
1753 above, but when creating a new Client ID, select "Installed
1754 application" of type "Other". Create a file with the following content,
1755 and pass its filename in the GOOGLE_DRIVE_SETTINGS environment
1756 variable:
1757
1758 client_config_backend: settings
1759 client_config:
1760 client_id: <Client ID from developers' console>
1761 client_secret: <Client secret from developers' console>
1762 save_credentials: True
1763 save_credentials_backend: file
1764 save_credentials_file: <filename to cache credentials>
1765 get_refresh_token: True
1766
1767 In this scenario, the username and host parts of the URL play no role;
1768 only the path matters. During the first run, you will be prompted to
1769 visit a URL in your browser to grant access to your drive. Once
1770 granted, you will receive a verification code to paste back into
1771 Duplicity. The credentials are then cached in the file referenced
1772 above for future use.
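
       A sketch of a backup in the regular-account scenario (settings file
       as above; since the username and host parts are ignored here, the
       address in the URL is a placeholder):

           GOOGLE_DRIVE_SETTINGS=/path/to/settings.yaml \
           duplicity /home/me pydrive://developer.gserviceaccount.com/some_dir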
1773
1774
1775A NOTE ON GDRIVE BACKEND
1776 GDrive: is a rewritten PyDrive: backend with fewer dependencies and a
1777 simpler setup - it uses the JSON keys downloaded directly from Google
1778 Cloud Console.
1779
1780 There are two ways to use GDrive: with a regular account or with a
1781 "service account". With a service account, a separate account is
1782 created that is only accessible via Google APIs, not a web login.
1783 With a regular account, you can store backups in your normal Google
1784 Drive.
1785
1786 To use a service account, go to the Google developers console at
1787 https://console.developers.google.com. Create a project, and make sure
1788 Drive API is enabled for the project. In the "Credentials" section,
1789 click "Create credentials", then select Service Account with JSON key.
1790
1791 The GOOGLE_SERVICE_JSON_FILE environment variable needs to contain the
1792 path to the JSON file on duplicity invocation.
1793
1794 The alternative is to use a regular account. To do this, start as
1795 above, but when creating a new Client ID, select "Create OAuth client
1796 ID", with application type of "Desktop app". Download the
1797 client_secret.json file for the new client, and set the
1798 GOOGLE_CLIENT_SECRET_JSON_FILE environment variable to the path to this
1799 file, and GOOGLE_CREDENTIALS_FILE to a path to a file where duplicity
1800 will keep the authentication token - this location must be writable.
1801
1802 During the first run, you will be prompted to visit a URL in your
1803 browser to grant access to your drive. Once granted, you will receive a
1804 verification code to paste back into Duplicity. The credentials are
1805 then cached in the file referenced above for future use.
1806
1807 As a sanity check, GDrive checks the host and username from the URL
1808 against the JSON key, and refuses to proceed if the addresses do not
1809 match. Either the email (for the service accounts) or Client ID (for
1810 regular OAuth accounts) must be present in the URL. See URL FORMAT
1811 above.
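
       A sketch of a service-account backup (both values are placeholders;
       as described above, the address in the URL must match the email in
       the JSON key):

           GOOGLE_SERVICE_JSON_FILE=/path/to/key.json \
           duplicity /home/me gdrive://<service account email>/some_dir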
1812
1813
1814A NOTE ON RCLONE BACKEND
1815 Rclone is a powerful command line program to sync files and directories
1816 to and from various cloud storage providers.
1817
1818 Once you have configured an rclone remote via
1819
1820 rclone config
1821
1822 and successfully set up a remote (e.g. gdrive for Google Drive),
1823 assuming you can list your remote files with
1824
1825 rclone ls gdrive:mydocuments
1826
1827 you can start your backup with
1828
1829 duplicity /mydocuments rclone://gdrive:/mydocuments
1830
1831 Please note the slash after the second colon. Some storage providers
1832 will work with or without a slash after the colon, but others will
1833 not. Since duplicity will complain about a malformed URL if the slash
1834 is not present, always put it after the colon, and the backend will
1835 handle it for you.
1836
1837
1838A NOTE ON SSH BACKENDS
1839 The ssh backends support the sftp and scp/ssh transport protocols.
1840 This is a known source of user confusion, as the two are fundamentally
1841 different. If you plan to access your backend via one of those, please
1842 inform yourself about the requirements for a server to support sftp or
1843 scp/ssh access. To make it even more confusing, the user can choose
1844 between several ssh backends via a scheme prefix: paramiko+ (default),
1845 pexpect+, lftp+... . paramiko & pexpect support --use-scp,
1846 --ssh-askpass and --ssh-options. Only the pexpect backend allows you
1847 to define --scp-command and --sftp-command.
1848
1849 The SSH paramiko backend (default) is a complete reimplementation of
1850 the ssh protocols natively in python. Advantages are speed and
1851 maintainability. A minor disadvantage is that extra packages are
1852 needed, as listed in REQUIREMENTS. In sftp (default) mode all
1853 operations are done via the corresponding sftp commands. In scp mode
1854 ( --use-scp ) scp is used for put/get operations, while listing is
1855 done via an ssh remote shell.
1855
1856 SSH pexpect backend is the legacy ssh backend using the command line
1857 ssh binaries via pexpect. Older versions used scp for get and put
1858 operations and sftp for list and delete operations. The current
1859 version uses sftp for all four supported operations, unless the --use-
1860 scp option is used to revert to old behavior.
1861
1862 SSH lftp backend is simply there because lftp can interact with the ssh
1863 cmd line binaries. It is meant as a last resort in case the above
1864 options fail for some reason.
1865
1866 Why use sftp instead of scp? The change to sftp was made in order to
1867 allow the remote system to chroot the backup, thus providing better
1868 security, and because sftp does not suffer from the shell quoting
1869 issues that scp does. Scp also does not support any kind of file
1870 listing, so sftp or ssh access will always be needed in addition for
1871 this backend mode to work properly. Sftp does not have these
1872 limitations but needs an sftp service running on the backend server,
1873 which is sometimes not an option.
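
       For example, selecting a specific backend via a scheme prefix and
       passing ssh options might look like this (the identity file path is
       a placeholder):

           duplicity --ssh-options="-oIdentityFile='/home/me/.ssh/backup_id'" \
           /home/me pexpect+sftp://uid@other.host/some_dir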
1874
1875
1876A NOTE ON SSL CERTIFICATE VERIFICATION
1877 Certificate verification, as implemented right now [02.2016], is
1878 available only in the webdav and lftp backends. Older Pythons (2.7.8
1879 and earlier) and older lftp binaries need a file-based database of
1880 certification authority certificates (cacert file).
1881 Newer Pythons (2.7.9+) and recent lftp versions support the system
1882 default certificates (usually in /etc/ssl/certs) and also allow giving
1883 an alternative CA cert folder via --ssl-cacert-path.
1884
1885 The cacert file has to be a PEM-formatted text file, as currently
1886 provided by the CURL project. See
1887
1888 http://curl.haxx.se/docs/caextract.html
1889
1890 After creating/retrieving a valid cacert file you should copy it to
1891 either
1892
1893 ~/.duplicity/cacert.pem
1894 ~/duplicity_cacert.pem
1895 /etc/duplicity/cacert.pem
1896
1897 Duplicity searches these locations in that order and will fail if it
1898 can't find a cacert file. You can however specify the option
1899 --ssl-cacert-file <file> to point duplicity to a copy elsewhere.
1900
1901 Finally there is the --ssl-no-check-certificate option to disable
1902 certificate verification altogether, in case some ssl library is
1903 missing or verification is not wanted. Use it with care; even with
1904 self-signed servers, manually providing the private CA certificate is
1905 definitely the safer option.
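
       For example, pointing duplicity at a cacert file in a non-default
       location:

           duplicity --ssl-cacert-file /path/to/cacert.pem \
           /home/me webdavs://user@other.host/some_dir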
1906
1907
1908A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
1909 Swift is the OpenStack Object Storage service.
1910 The backend requires python-swiftclient to be installed on the system.
1911 python-keystoneclient is also needed to use OpenStack's Keystone
1912 Identity service. See REQUIREMENTS.
1913
1914 It uses the following environment variables for authentication:
1915 SWIFT_USERNAME (required), SWIFT_PASSWORD (required), SWIFT_AUTHURL
1916 (required), SWIFT_USERID (required, only for IBM Bluemix
1917 ObjectStorage), SWIFT_TENANTID (required, only for IBM Bluemix
1918 ObjectStorage), SWIFT_REGIONNAME (required, only for IBM Bluemix
1919 ObjectStorage), SWIFT_TENANTNAME (optional, the tenant can be included
1920 in the username)
1921
1922 If the user was previously authenticated, the following environment
1923 variables can be used instead: SWIFT_PREAUTHURL (required),
1924 SWIFT_PREAUTHTOKEN (required)
1925
1926 If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
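
       A minimal sketch with password authentication (all values are
       placeholders):

           SWIFT_USERNAME=<user> SWIFT_PASSWORD=<password> \
           SWIFT_AUTHURL=<auth url> \
           duplicity /home/me swift://my_container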
1927
1928
1929A NOTE ON PCA ACCESS
1930 PCA is a long-term data archival solution by OVH. It runs a slightly
1931 modified version of OpenStack Swift that introduces latency in the
1932 data retrieval process. It is a good pick for a multi backend
1933 configuration in which it receives the volumes while another backend
1934 is used to store manifests and signatures.
1935
1936 The backend requires python-swiftclient to be installed on the system.
1937 python-keystoneclient is also needed to interact with OpenStack's
1938 Keystone Identity service. See REQUIREMENTS.
1939
1940 It uses the following environment variables for authentication:
1941 PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL
1942 (required), PCA_USERID (optional), PCA_TENANTID (optional, but either
1943 the tenant name or tenant id must be supplied), PCA_REGIONNAME
1944 (optional), PCA_TENANTNAME (optional, but either the tenant name or
1945 tenant id must be supplied)
1946
1947 If the user was previously authenticated, the following environment
1948 variables can be used instead: PCA_PREAUTHURL (required),
1949 PCA_PREAUTHTOKEN (required)
1950
1951 If PCA_AUTHVERSION is unspecified, it will default to version 2.
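
       Usage mirrors the swift backend; for example (all values are
       placeholders):

           PCA_USERNAME=<user> PCA_PASSWORD=<password> PCA_AUTHURL=<auth url> \
           duplicity /home/me pca://my_container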
1952
1953
1954A NOTE ON MEDIAFIRE BACKEND
1955 This backend requires the mediafire python library to be installed on
1956 the system. See REQUIREMENTS.
1957
1958 Use URL escaping for username (and password, if provided via command
1959 line):
1960
1961
1962 mf://duplicity%40example.com@mediafire.com/some_folder
1963
1964 The destination folder will be created for you if it does not exist.
1965
1966
1967A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
1968 Signing and symmetrically encrypting at the same time with the gpg
1969 binary on the command line, as used within duplicity, is a
1970 particularly challenging issue. Tests showed that the following
1971 combinations work.
1972
1973 1. Set up gpg-agent properly. Use the option --use-agent and enter both
1974 passphrases (symmetric and sign key) in the gpg-agent's dialog.
1975
1976 2. Use a PASSPHRASE of your choice for symmetric encryption, while the
1977 signing key has an empty passphrase.
1978
1979 3. The PASSPHRASE used for symmetric encryption and the passphrase of
1980 the signing key are identical.
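
       For example, combination 3 can be scripted by supplying the same
       passphrase through the PASSPHRASE and SIGN_PASSPHRASE environment
       variables (the key id is a placeholder):

           PASSPHRASE=<passphrase> SIGN_PASSPHRASE=<passphrase> \
           duplicity --sign-key <keyid> /home/me sftp://uid@other.host/some_dir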
1981
1982
1983KNOWN ISSUES / BUGS
1984 Hard links are currently unsupported (they will be treated as
1985 non-linked regular files).
1986
1987 Bad signatures will be treated as empty instead of logging an
1988 appropriate error message.
1989
1990
1991OPERATION AND DATA FORMATS
1992 This section describes duplicity's basic operation and the format of
1993 its data files. It should not be necessary to read this section to use
1994 duplicity.
1995
1996 The files used by duplicity to store backup data are tarfiles in GNU
1997 tar format. They can be produced independently by rdiffdir(1). For
1998 incremental backups, new files are saved normally in the tarfile. But
1999 when a file changes, instead of storing a complete copy of the file,
2000 only a diff is stored, as generated by rdiff(1). If a file is deleted,
2001 a 0 length file is stored in the tar. It is possible to restore a
2002 duplicity archive "manually" by using tar and then cp, rdiff, and rm as
2003 necessary. These duplicity archives have the extension difftar.
2004
2005 Both full and incremental backup sets have the same format. In effect,
2006 a full backup set is an incremental one generated from an empty
2007 signature (see below). The files in full backup sets will start with
2008 duplicity-full while the incremental sets start with duplicity-inc.
2009 When restoring, duplicity applies patches in order, so deleting, for
2010 instance, a full backup set may make related incremental backup sets
2011 unusable.
2012
2013 In order to determine which files have been deleted, and to calculate
2014 diffs for changed files, duplicity needs to process information about
2015 previous sessions. It stores this information in the form of tarfiles
2016 where each entry's data contains the signature (as produced by rdiff)
2017 of the file instead of the file's contents. These signature sets have
2018 the extension sigtar.
2019
2020 Signature files are not required to restore a backup set, but without
2021 an up-to-date signature, duplicity cannot append an incremental backup
2022 to an existing archive.
2023
2024 To save bandwidth, duplicity generates full signature sets and
2025 incremental signature sets. A full signature set is generated for each
2026 full backup, and an incremental one for each incremental backup. These
2027 start with duplicity-full-signatures and duplicity-new-signatures
2028 respectively. These signatures will be stored both locally and
2029 remotely. The remote signatures will be encrypted if encryption is
2030 enabled. The local signatures are stored unencrypted in the
2031 archive dir (see --archive-dir ).
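
       As an illustration, after one full and one incremental backup a
       backend typically holds files named along these lines (timestamps
       abbreviated as <time1> and <time2>):

           duplicity-full.<time1>.vol1.difftar.gpg
           duplicity-full.<time1>.manifest.gpg
           duplicity-full-signatures.<time1>.sigtar.gpg
           duplicity-inc.<time1>.to.<time2>.vol1.difftar.gpg
           duplicity-inc.<time1>.to.<time2>.manifest.gpg
           duplicity-new-signatures.<time1>.to.<time2>.sigtar.gpg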
2032
2033
2034REQUIREMENTS
2035 Duplicity requires a POSIX-like operating system with a python
2036 interpreter version 2.6+ installed. It is best used under GNU/Linux.
2037
2038 Some backends also require additional components (probably available as
2039 packages for your specific platform):
2040
2041 Amazon Drive backend
2042 python-requests - http://python-requests.org
2043 python-requests-oauthlib - https://github.com/requests/requests-
2044 oauthlib
2045
2046 azure backend (Azure Storage Blob Service)
2047 Microsoft Azure Storage Blobs client library for Python -
2048 https://pypi.org/project/azure-storage-blob/
2049
2050 boto backend (S3 Amazon Web Services, Google Cloud Storage)
2051 boto version 2.0+ - http://github.com/boto/boto
2052
2053 box backend (box.com)
2054 boxsdk - https://github.com/box/box-python-sdk
2055
2056 cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
2057 Rackspace CloudFiles Pyrax API -
2058 http://docs.rackspace.com/sdks/guide/content/python.html
2059
2060 dpbx backend (Dropbox)
2061 Dropbox Python SDK -
2062 https://www.dropbox.com/developers/reference/sdk
2063
2064 gdocs gdata backend (legacy Google Docs backend)
2065 Google Data APIs Python Client Library -
2066 http://code.google.com/p/gdata-python-client/
2067
2068 gdocs pydrive backend (default)
2069 see pydrive backend
2070
2071 gio backend (Gnome VFS API)
2072 PyGObject - http://live.gnome.org/PyGObject
2073 D-Bus (dbus)- http://www.freedesktop.org/wiki/Software/dbus
2074
2075 lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
2076 sftp, webdav[s])
2077 LFTP Client - http://lftp.yar.ru/
2078
2079 MEGA backend (only works for accounts created prior to November 2018)
2080 (mega.nz)
2081 megatools client - https://github.com/megous/megatools
2082
2083 MEGA v2 and v3 backend (works for all MEGA accounts) (mega.nz)
2084 MEGAcmd client - https://mega.nz/cmd
2085
2086 multi backend
2087 Multi -- store to more than one backend
2088 (also see A NOTE ON MULTI BACKEND below).
2089
2090 ncftp backend (ftp, select via ncftp+ftp://)
2091 NcFTP - http://www.ncftp.com/
2092
2093 OneDrive backend (Microsoft OneDrive)
2094 python-requests - http://python-requests.org
2095 python-requests-oauthlib - https://github.com/requests/requests-
2096 oauthlib
2097
2098 Par2 Wrapper Backend
2099 par2cmdline - http://parchive.sourceforge.net/
2100
2101 pydrive backend
2102 PyDrive -- a wrapper library of google-api-python-client -
2103 https://pypi.python.org/pypi/PyDrive
2104 (also see A NOTE ON PYDRIVE BACKEND below).
2105
2106 rclone backend
2107 rclone - https://rclone.org/
2108
2109 rsync backend
2110 rsync client binary - http://rsync.samba.org/
2111
2112 ssh paramiko backend (default)
2113 paramiko (SSH2 for python) -
2114 http://pypi.python.org/pypi/paramiko (downloads);
2115 http://github.com/paramiko/paramiko (project page)
2116 pycrypto (Python Cryptography Toolkit) -
2117 http://www.dlitz.net/software/pycrypto/
2118
2119 ssh pexpect backend
2120 sftp/scp client binaries OpenSSH - http://www.openssh.com/
2121 Python pexpect module -
2122 http://pexpect.sourceforge.net/pexpect.html
2123
2124 swift backend (OpenStack Object Storage)
2125 Python swiftclient module - https://github.com/openstack/python-
2126 swiftclient/
2127 Python keystoneclient module -
2128 https://github.com/openstack/python-keystoneclient/
2129
2130 webdav backend
2131 certificate authority database file for ssl certificate
2132 verification of HTTPS connections -
2133 http://curl.haxx.se/docs/caextract.html
2134 (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
2135 Python kerberos module for kerberos authentication -
2136 https://github.com/02strich/pykerberos
2137
2138 MediaFire backend
2139 MediaFire Python Open SDK -
2140 https://pypi.python.org/pypi/mediafire/
2141
2142
2143AUTHOR
2144 Original Author - Ben Escoto <bescoto@stanford.edu>
2145
2146 Current Maintainer - Kenneth Loafman <kenneth@loafman.com>
2147
2148 Continuous Contributors
2149 Edgar Soldin, Mike Terry
2150
2151 Most backends were contributed individually. Information about their
2152 authorship may be found in the corresponding file's header.
2153
2154 Also we'd like to thank everybody posting issues to the mailing list or
2155 on launchpad, sending in patches or contributing otherwise. Duplicity
2156 wouldn't be as stable and useful if it weren't for you.
2157
2158 A special thanks goes to rsync.net, a Cloud Storage provider with
2159 explicit support for duplicity, for several monetary donations and for
2160 providing a special "duplicity friends" rate for their offsite backup
2161 service. Email info@rsync.net for details.
2162
2163
2164SEE ALSO
2165 rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).
2166
2167
2168
2169Version 0.8.19 April 29, 2021 DUPLICITY(1)