DUPLICITY(1)                     User Manuals                     DUPLICITY(1)



NAME
       duplicity - Encrypted incremental backup to local or remote storage.


SYNOPSIS
       For detailed descriptions of each command, see chapter ACTIONS.

       duplicity [full|incremental] [options] source_directory target_url

       duplicity verify [options] [--compare-data] [--file-to-restore
       <relpath>] [--time time] source_url target_directory

       duplicity collection-status [options] [--file-changed <relpath>]
       target_url

       duplicity list-current-files [options] [--time time] target_url

       duplicity [restore] [options] [--file-to-restore <relpath>] [--time
       time] source_url target_directory

       duplicity remove-older-than <time> [options] [--force] target_url

       duplicity remove-all-but-n-full <count> [options] [--force] target_url

       duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
       target_url

       duplicity cleanup [options] [--force] target_url

       duplicity replicate [options] [--time time] source_url target_url

DESCRIPTION
       Duplicity incrementally backs up files and folders into tar-format
       volumes encrypted with GnuPG and places them on a remote (or local)
       storage backend.  See chapter URL FORMAT for a list of all supported
       backends and how to address them.  Because duplicity uses librsync,
       incremental backups are space efficient and only record the parts of
       files that have changed since the last backup.  Currently duplicity
       supports deleted files, full Unix permissions, uid/gid, directories,
       symbolic links, fifos, etc., but not hard links.

       If you are backing up the root directory /, remember to --exclude
       /proc, or else duplicity will probably crash on the weird stuff in
       there.

EXAMPLES
       Here is an example of a backup, using sftp to back up /home/me to
       some_dir on the other.host machine:

          duplicity /home/me sftp://uid@other.host/some_dir

       If the above is run repeatedly, the first run will be a full backup,
       and subsequent ones will be incremental.  To force a full backup, use
       the full action:

          duplicity full /home/me sftp://uid@other.host/some_dir

       or enforce a periodic full backup via --full-if-older-than <time>,
       e.g. a full every month:

          duplicity --full-if-older-than 1M /home/me
          sftp://uid@other.host/some_dir

       Now suppose we accidentally delete /home/me and want to restore it
       the way it was at the time of the last backup:

          duplicity sftp://uid@other.host/some_dir /home/me

       Duplicity enters restore mode because the URL comes before the local
       directory.  If we wanted to restore just the file "Mail/article" in
       /home/me as it was three days ago into /home/me/restored_file:

          duplicity -t 3D --file-to-restore Mail/article
          sftp://uid@other.host/some_dir /home/me/restored_file

       The following command compares the latest backup with the current
       files:

          duplicity verify sftp://uid@other.host/some_dir /home/me

       Finally, duplicity recognizes several include/exclude options.  For
       instance, the following will back up the root directory, but exclude
       /mnt, /tmp, and /proc:

          duplicity --exclude /mnt --exclude /tmp --exclude /proc /
          file:///usr/local/backup

       Note that in this case the destination is the local directory
       /usr/local/backup.  The following will back up only the /home and
       /etc directories under root:

          duplicity --include /home --include /etc --exclude '**' /
          file:///usr/local/backup

       Duplicity can also access a repository via ftp.  If a user name is
       given, the environment variable FTP_PASSWORD is read to determine the
       password:

          FTP_PASSWORD=mypassword duplicity /local/dir
          ftp://user@other.host/some_dir


ACTIONS
       Duplicity knows action commands, which can be fine-tuned with
       options.  The backup actions (full, incr) and the restore action
       (restore) can be omitted, because duplicity detects which mode to use
       from the order of the target URL and the local folder: if the target
       URL comes before the local folder, a restore is performed; if the
       local folder comes before the target URL, that folder is backed up to
       the target URL.
       If a backup is requested and old signatures can be found, duplicity
       automatically performs an incremental backup.

       Note: The following explanations cover some but not all options that
       can be used in connection with each action command.  Consult the
       OPTIONS section for more detailed information.


       full <folder> <url>
              Perform a full backup.  A new backup chain is started even if
              signatures are available for an incremental backup.


       incr <folder> <url>
              If this is requested, an incremental backup will be performed.
              Duplicity will abort if no old signatures can be found.


       verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
       <url> <local_path>
              Verify tests the integrity of the backup archives at the
              remote location by downloading each file and checking both
              that it can restore the archive and that the restored file
              matches the signature of that file stored in the backup, i.e.
              it compares the archived file with its hash value from
              archival time.  Verify does not actually restore and will not
              overwrite any local files.  Duplicity will exit with a
              non-zero error level if any files do not match the signature
              stored in the archive for that file.  On verbosity level 4 or
              higher, it will log a message for each file that differs from
              the stored signature.  Files must be downloaded to the local
              machine in order to compare them.  Verify does not compare the
              backed-up version of the file to the current local copy of the
              files unless the --compare-data option is used (see below).
              The --file-to-restore option restricts verify to that file or
              folder.  The --time option allows you to select a backup to
              verify.  The --compare-data option enables data comparison
              (see below).

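The comparison that verify performs can be pictured as follows.  This is a simplified sketch only: duplicity actually uses librsync signatures, not a plain SHA-256 digest of the whole file.

```python
import hashlib

def record_signature(data: bytes) -> str:
    # At backup time: record a digest of the file contents.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, stored_signature: str) -> bool:
    # At verify time: restore the file (in memory here) and compare its
    # digest against the one recorded at archival time.
    return hashlib.sha256(data).hexdigest() == stored_signature

sig = record_signature(b"hello world")
print(verify(b"hello world", sig))   # True: unchanged file verifies
print(verify(b"hello w0rld", sig))   # False: modified file fails
```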

       collection-status [--file-changed <relpath>] <url>
              Summarize the status of the backup repository by printing the
              chains and sets found, and the number of volumes in each.


       list-current-files [--time <time>] <url>
              Lists the files contained in the most current backup or the
              backup at the given time.  The information will be extracted
              from the signature files, not the archive data itself.  Thus
              the whole archive does not have to be downloaded, but on the
              other hand if the archive has been deleted or corrupted, this
              command will not detect it.


       restore [--file-to-restore <relpath>] [--time <time>] <url>
       <target_folder>
              You can restore the full monty or selected folders/files from
              a specific time.  Use the relative path as it is printed by
              list-current-files.  Usually not needed, as duplicity enters
              restore mode when it detects that the URL comes before the
              local folder.


       remove-older-than <time> [--force] <url>
              Delete all backup sets older than the given time.  Old backup
              sets will not be deleted if backup sets newer than time depend
              on them.  See the TIME FORMATS section for more information.
              Note that this action cannot be combined with backup or other
              actions, such as cleanup.  Note also that --force will be
              needed to delete the files instead of just listing them.


       remove-all-but-n-full <count> [--force] <url>
              Delete all backup sets that are older than the count:th last
              full backup (in other words, keep the last count full backups
              and associated incremental sets).  count must be larger than
              zero.  A value of 1 means that only the single most recent
              backup chain will be kept.  Note that --force will be needed
              to delete the files instead of just listing them.

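The retention rule above can be sketched as follows.  A chain is a full backup plus its dependent incrementals; the newest count chains survive.  The data structure is hypothetical, purely for illustration, and is not duplicity's internal representation.

```python
# Chains ordered oldest to newest (hypothetical example data).
chains = [
    {"full": "2023-01-01", "incrementals": ["2023-01-08", "2023-01-15"]},
    {"full": "2023-02-01", "incrementals": ["2023-02-08"]},
    {"full": "2023-03-01", "incrementals": []},
]

def remove_all_but_n_full(chains, count):
    # Keep the newest `count` full backups and their incremental sets.
    if count < 1:
        raise ValueError("count must be larger than zero")
    return chains[-count:]

kept = remove_all_but_n_full(chains, 1)
print([c["full"] for c in kept])  # only the most recent chain remains
```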

       remove-all-inc-of-but-n-full <count> [--force] <url>
              Delete incremental sets of all backup sets that are older than
              the count:th last full backup (in other words, keep only old
              full backups and not their increments).  count must be larger
              than zero.  A value of 1 means that only the single most
              recent backup chain will be kept intact.  Note that --force
              will be needed to delete the files instead of just listing
              them.


       cleanup [--force] <url>
              Delete the extraneous duplicity files on the given backend.
              Non-duplicity files, or files in complete data sets, will not
              be deleted.  This should only be necessary after a duplicity
              session fails or is aborted prematurely.  Note that --force
              will be needed to delete the files instead of just listing
              them.


       replicate [--time time] <source_url> <target_url>
              Replicate backup sets from the source to the target backend.
              Files will be (re)-encrypted and (re)-compressed depending on
              normal backend options.  Signatures and volumes will not get
              recomputed, thus options like --volsize or --max-blocksize
              have no effect.  When --time time is given, only backup sets
              older than time will be replicated.


OPTIONS
       --allow-source-mismatch
              Do not abort on attempts to use the same archive dir or remote
              backend to back up different directories.  duplicity will tell
              you if you need this switch.


       --archive-dir path
              The archive directory.  NOTE: This option changed in 0.6.0.
              The archive directory is now necessary in order to manage
              persistence for current and future enhancements.  As such,
              this option is now used only to change the location of the
              archive directory.  The archive directory should not be
              deleted, or duplicity will have to recreate it from the remote
              repository (which may require decrypting the backup contents).

              When backing up or restoring, this option specifies that the
              local archive directory is to be created in path.  If the
              archive directory is not specified, the default is to create
              the archive directory in ~/.cache/duplicity/.

              The archive directory can be shared between backups to
              multiple targets, because a subdirectory of the archive dir is
              used for individual backups (see --name ).

              The combination of archive directory and backup name must be
              unique in order to separate the data of different backups.

              The interaction between the --archive-dir and the --name
              options allows for four possible combinations for the location
              of the archive dir:


              1. neither specified (default)
                 ~/.cache/duplicity/hash-of-url

              2. --archive-dir=/arch, no --name
                 /arch/hash-of-url

              3. no --archive-dir, --name=foo
                 ~/.cache/duplicity/foo

              4. --archive-dir=/arch, --name=foo
                 /arch/foo

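The four combinations above can be sketched like this.  The MD5 digest shown is only a stand-in for illustration; the exact hash duplicity derives from the URL is an implementation detail and may differ.

```python
import hashlib
import os

def archive_path(url, archive_dir=None, name=None):
    # Base directory: --archive-dir if given, else the default cache dir.
    base = archive_dir or os.path.expanduser("~/.cache/duplicity")
    # Subdirectory: --name if given, else a hash of the URL
    # (MD5 here is a stand-in, not necessarily duplicity's digest).
    subdir = name or hashlib.md5(url.encode()).hexdigest()
    return os.path.join(base, subdir)

url = "sftp://uid@other.host/some_dir"
print(archive_path(url))                                   # case 1
print(archive_path(url, archive_dir="/arch"))              # case 2
print(archive_path(url, name="foo"))                       # case 3
print(archive_path(url, archive_dir="/arch", name="foo"))  # case 4: /arch/foo
```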

       --asynchronous-upload
              (EXPERIMENTAL) Perform file uploads asynchronously in the
              background, with respect to volume creation.  This means that
              duplicity can upload a volume while, at the same time,
              preparing the next volume for upload.  The intended end result
              is a faster backup, because the local CPU and your bandwidth
              can be more consistently utilized.  Use of this option implies
              an additional need for disk space in the temporary storage
              location; rather than needing to store only one volume at a
              time, enough storage space is required to store two volumes.


       --backend-retry-delay number
              Specifies the number of seconds that duplicity waits after an
              error has occurred before attempting to repeat the operation.

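The interaction between --num-retries and --backend-retry-delay can be sketched as a generic retry loop; this is an illustration of the behaviour, not duplicity's code.

```python
import time

def with_retries(operation, num_retries=5, retry_delay=1.0):
    # Retry `operation` up to num_retries times, sleeping retry_delay
    # seconds between attempts (cf. --backend-retry-delay).
    for attempt in range(num_retries):
        try:
            return operation()
        except IOError:
            if attempt == num_retries - 1:
                raise  # give up after the last retry (cf. --num-retries)
            time.sleep(retry_delay)

attempts = []
def flaky_upload():
    # Simulated backend that fails twice, then succeeds.
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("transient backend error")
    return "uploaded"

print(with_retries(flaky_upload, num_retries=5, retry_delay=0))  # uploaded
```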


       --cf-backend backend
              Allows the explicit selection of a cloudfiles backend.
              Defaults to pyrax.  Alternatively you might choose cloudfiles.


       --b2-hide-files
              Causes Duplicity to hide files in B2 instead of deleting them.
              Useful in combination with B2's lifecycle rules.


       --compare-data
              Enable data comparison of regular files on action verify.
              This conducts a verify as described above to verify the
              integrity of the backup archives, but additionally compares
              restored files to those in target_directory.  Duplicity will
              not replace any files in target_directory.  Duplicity will
              exit with a non-zero error level if the files do not correctly
              verify or if any files from the archive differ from those in
              target_directory.  On verbosity level 4 or higher, it will log
              a message for each file that differs from its equivalent in
              target_directory.


       --copy-links
              Resolve symlinks during backup.  Enabling this will resolve
              and back up the symlink's file/folder data instead of the
              symlink itself, potentially increasing the size of the backup.


       --dry-run
              Calculate what would be done, but do not perform any backend
              actions.


       --encrypt-key key-id
              When backing up, encrypt to the given public key, instead of
              using symmetric (traditional) encryption.  Can be specified
              multiple times.  The key-id can be given in any of the formats
              supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
              ID" for details.


       --encrypt-secret-keyring filename
              This option can only be used with --encrypt-key, and changes
              the path to the secret keyring for the encrypt key to
              filename.  This keyring is not used when creating a backup.
              If not specified, the default secret keyring is used, which is
              usually located at .gnupg/secring.gpg


       --encrypt-sign-key key-id
              Convenience parameter.  Same as --encrypt-key key-id
              --sign-key key-id.


       --exclude shell_pattern
              Exclude the file or files matched by shell_pattern.  If a
              directory is matched, then files under that directory will
              also be matched.  See the FILE SELECTION section for more
              information.


       --exclude-device-files
              Exclude all device files.  This can be useful for
              security/permissions reasons or if duplicity is not handling
              device files correctly.


       --exclude-filelist filename
              Excludes the files listed in filename, with each line of the
              filelist interpreted according to the same rules as --include
              and --exclude.  See the FILE SELECTION section for more
              information.


       --exclude-if-present filename
              Exclude directories if filename is present.  Allows the user
              to specify folders that they do not wish to back up by adding
              a specified file (e.g. ".nobackup") instead of maintaining a
              comprehensive exclude/include list.

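The pruning behaviour of --exclude-if-present can be sketched with a directory walk; this is an illustration of the rule, not duplicity's file-selection code.

```python
import os
import tempfile

def backup_candidates(root, marker=".nobackup"):
    # Skip any directory that contains the marker file, and do not
    # descend into it (cf. --exclude-if-present .nobackup).
    selected = []
    for dirpath, dirnames, filenames in os.walk(root):
        if marker in filenames:
            dirnames[:] = []  # prune: do not descend further
            continue          # and exclude this directory's files
        selected.extend(os.path.join(dirpath, f) for f in filenames)
    return sorted(selected)

# Build a small demo tree: keep/ is backed up, skip/ carries the marker.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "keep"))
os.makedirs(os.path.join(root, "skip"))
open(os.path.join(root, "keep", "a.txt"), "w").close()
open(os.path.join(root, "skip", "b.txt"), "w").close()
open(os.path.join(root, "skip", ".nobackup"), "w").close()
print(backup_candidates(root))  # only keep/a.txt is selected
```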

       --exclude-older-than time
              Exclude any files whose modification date is earlier than the
              specified time.  This can be used to produce a partial backup
              that contains only recently changed files.  See the TIME
              FORMATS section for more information.

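The modification-time cutoff can be sketched as follows; a simplified illustration of the rule, not duplicity's implementation.

```python
import os
import tempfile
import time

def recently_changed(root, cutoff_seconds):
    # Keep only files whose mtime is newer than the cutoff
    # (cf. --exclude-older-than 1W for a one-week cutoff).
    cutoff = time.time() - cutoff_seconds
    kept = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) >= cutoff:
                kept.append(path)
    return kept

root = tempfile.mkdtemp()
old = os.path.join(root, "old.txt")
new = os.path.join(root, "new.txt")
for p in (old, new):
    open(p, "w").close()
week = 7 * 24 * 3600
# Backdate old.txt by two weeks so the one-week filter drops it.
os.utime(old, (time.time() - 2 * week, time.time() - 2 * week))
print(recently_changed(root, week))  # only new.txt survives
```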

       --exclude-other-filesystems
              Exclude files on file systems (identified by device number)
              other than the file system the root of the source directory is
              on.


       --exclude-regexp regexp
              Exclude files matching the given regexp.  Unlike the --exclude
              option, this option does not match files in a directory it
              matches.  See the FILE SELECTION section for more information.


       --file-prefix, --file-prefix-manifest, --file-prefix-archive,
       --file-prefix-signature
              Adds a prefix to all files, manifest files, archive files,
              and/or signature files.

              The same set of prefixes must be passed in on backup and
              restore.

              If both global and type-specific prefixes are set, the global
              prefix will go before the type-specific prefixes.

              See also A NOTE ON FILENAME PREFIXES


       --file-to-restore path
              This option may be given in restore mode, causing only path to
              be restored instead of the entire contents of the backup
              archive.  path should be given relative to the root of the
              directory backed up.


       --full-if-older-than time
              Perform a full backup if an incremental backup is requested,
              but the latest full backup in the collection is older than the
              given time.  See the TIME FORMATS section for more
              information.


       --force
              Proceed even if data loss might result.  Duplicity will let
              the user know when this option is required.


       --ftp-passive
              Use passive (PASV) data connections.  The default is to use
              passive, but to fall back to regular if the passive connection
              fails or times out.


       --ftp-regular
              Use regular (PORT) data connections.


       --gio  Use the GIO backend and interpret any URLs as GIO would.


       --hidden-encrypt-key key-id
              Same as --encrypt-key, but it hides the user's key id from the
              encrypted file.  It uses gpg's --hidden-recipient command to
              obfuscate the owner of the backup.  On restore, gpg will
              automatically try all available secret keys in order to
              decrypt the backup.  See gpg(1) for more details.


       --ignore-errors
              Try to ignore certain errors if they happen.  This option is
              only intended to allow the restoration of a backup in the face
              of certain problems that would otherwise cause the backup to
              fail.  It is not ever recommended to use this option unless
              you have a situation where you are trying to restore from
              backup and it is failing because of an issue which you want
              duplicity to ignore.  Even then, depending on the issue, this
              option may not have an effect.

              Please note that while ignored errors will be logged, there
              will be no summary at the end of the operation to tell you
              what was ignored, if anything.  If this is used for emergency
              restoration of data, it is recommended that you run the backup
              in such a way that you can revisit the backup log (look for
              lines containing the string IGNORED_ERROR).

              If you ever have to use this option for reasons that are not
              understood, or understood but not your own responsibility,
              please contact the duplicity maintainers.  The need to use
              this option under production circumstances would normally be
              considered a bug.


       --imap-full-address email_address
              The full email address of the user name when logging into an
              imap server.  If not supplied, just the user name part of the
              email address is used.


       --imap-mailbox option
              Allows you to specify a different mailbox.  The default is
              "INBOX".  Other languages may require a different mailbox than
              the default.


       --gpg-binary file_path
              Allows you to force duplicity to use file_path as the gpg
              command line binary.  Can be an absolute or relative file path
              or a file name.  Default value is 'gpg'.  The binary will be
              located via the PATH environment variable.


       --gpg-options options
              Allows you to pass options to gpg encryption.  The options
              list should be of the form "--opt1 --opt2=parm" where the
              string is quoted and the only spaces allowed are between
              options.


       --include shell_pattern
              Similar to --exclude but include matched files instead.
              Unlike --exclude, this option will also match parent
              directories of matched files (although not necessarily their
              contents).  See the FILE SELECTION section for more
              information.


       --include-filelist filename
              Like --exclude-filelist, but include the listed files instead.
              See the FILE SELECTION section for more information.


       --include-regexp regexp
              Include files matching the regular expression regexp.  Only
              files explicitly matched by regexp will be included by this
              option.  See the FILE SELECTION section for more information.


       --log-fd number
              Write specially-formatted versions of output messages to the
              specified file descriptor.  The format used is designed to be
              easily consumable by other programs.


       --log-file filename
              Write specially-formatted versions of output messages to the
              specified file.  The format used is designed to be easily
              consumable by other programs.


       --max-blocksize number
              Determines the block size examined for changes during the diff
              process.  For files < 1MB the blocksize is a constant of 512.
              For files over 1MB the size is given by:

                 file_blocksize = int((file_len / (2000 * 512)) * 512)
                 return min(file_blocksize, config.max_blocksize)

              where config.max_blocksize defaults to 2048.  If you specify a
              larger max_blocksize, your difftar files will be larger, but
              your sigtar files will be smaller.  If you specify a smaller
              max_blocksize, the reverse occurs.  The --max-blocksize option
              should be in multiples of 512.

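The blocksize formula above can be wrapped into a small runnable function; this is a direct transcription of the formula shown, with sizes in bytes.

```python
def diff_blocksize(file_len, max_blocksize=2048):
    # Files under 1MB use a constant blocksize of 512.
    if file_len < 1024 * 1024:
        return 512
    # Otherwise apply the formula from the manual, capped at max_blocksize.
    file_blocksize = int((file_len / (2000 * 512)) * 512)
    return min(file_blocksize, max_blocksize)

print(diff_blocksize(512 * 1024))              # 512  (below 1MB)
print(diff_blocksize(2 * 1024 * 1024))         # 1048 (formula applies)
print(diff_blocksize(100 * 1024 * 1024))       # 2048 (capped at default)
```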

       --name symbolicname
              Set the symbolic name of the backup being operated on.  The
              intent is to use a separate name for each logically distinct
              backup.  For example, someone may use "home_daily_s3" for the
              daily backup of a home directory to Amazon S3.  The structure
              of the name is up to the user; it is only important that the
              names be distinct.  The symbolic name is currently only used
              to affect the expansion of --archive-dir , but may be used for
              additional features in the future.  Users running more than
              one distinct backup are encouraged to use this option.

              If not specified, the default value is a hash of the backend
              URL.


       --no-compression
              Do not use GZip to compress files on the remote system.


       --no-encryption
              Do not use GnuPG to encrypt files on the remote system.


       --no-print-statistics
              By default duplicity will print statistics about the current
              session after a successful backup.  This switch disables that
              behavior.


       --null-separator
              Use nulls (\0) instead of newlines (\n) as line separators,
              which may help when dealing with filenames containing
              newlines.  This affects the expected format of the files
              specified by the --{include|exclude}-filelist switches as well
              as the format of the directory statistics file.

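Why null separators help can be shown with a short sketch of filelist parsing; a simplified illustration, not duplicity's parser.

```python
def parse_filelist(data: bytes, null_separator=False):
    # Split a filelist on \0 (--null-separator) or \n (default);
    # a trailing separator yields no empty entry.
    sep = b"\0" if null_separator else b"\n"
    return [entry for entry in data.split(sep) if entry]

# A filename that itself contains a newline is wrongly split in two...
data_nl = b"plain.txt\nweird\nname.txt\n"
print(parse_filelist(data_nl))                        # 3 entries (wrong)

# ...but survives intact when entries are null-separated.
data_nul = b"plain.txt\0weird\nname.txt\0"
print(parse_filelist(data_nul, null_separator=True))  # 2 entries (right)
```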

       --numeric-owner
              On restore, always use the numeric uid/gid from the archive
              and not the archived user/group names, which is the default
              behaviour.  Recommended for restoring from live CDs which
              might have users with identical names but different uids/gids.


       --do-not-restore-ownership
              Ignores the uid/gid from the archive and keeps the current
              user's ownership.  Recommended for restoring data to mounted
              filesystems which do not support Unix ownership or when root
              privileges are not available.


       --num-retries number
              Number of retries to make on errors before giving up.


       --old-filenames
              Use the old filename format (incompatible with Windows/Samba)
              rather than the new filename format.


       --par2-options options
              Verbatim options to pass to par2.


       --par2-redundancy percent
              Adjust the level of redundancy in percent for Par2 recovery
              files (default 10%).


       --progress
              When selected, duplicity will output the current upload
              progress and estimated upload time.  To annotate changes, it
              will perform a first dry-run before a full or incremental
              backup, and then run the real operation estimating the real
              upload progress.


       --progress-rate number
              Sets the update rate at which duplicity will output the upload
              progress messages (requires the --progress option).  The
              default is to print the status every 3 seconds.


       --rename <original path> <new path>
              Treats original path in the backup as if it were new path.
              Can be passed multiple times.  An example:

                 duplicity restore --rename Documents/metal Music/metal
                 sftp://uid@other.host/some_dir /home/me

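The path remapping --rename performs can be sketched as a prefix substitution; an illustration of the rule only, not duplicity's code.

```python
def apply_renames(path, renames):
    # Remap a leading path component from the backup onto a new
    # location (cf. --rename orig new); the longest prefix wins.
    for orig, new in sorted(renames.items(), key=lambda kv: -len(kv[0])):
        if path == orig or path.startswith(orig + "/"):
            return new + path[len(orig):]
    return path

renames = {"Documents/metal": "Music/metal"}
print(apply_renames("Documents/metal/opeth.mp3", renames))
# Music/metal/opeth.mp3
print(apply_renames("Documents/letter.txt", renames))
# Documents/letter.txt (untouched: no prefix matched)
```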

       --rsync-options options
              Allows you to pass options to the rsync backend.  The options
              list should be of the form "opt1=parm1 opt2=parm2" where the
              option string is quoted and the only spaces allowed are
              between options.  The option string will be passed verbatim to
              rsync, after any internally generated option designating the
              remote port to use.  Here is a possibly useful example:

                 duplicity --rsync-options="--partial-dir=.rsync-partial"
                 /home/me rsync://uid@other.host/some_dir


       --s3-european-buckets
              When using the Amazon S3 backend, create buckets in Europe
              instead of the default (requires --s3-use-new-style ).  Also
              see the EUROPEAN S3 BUCKETS section.

              This option does not apply when using the newer boto3 backend,
              which does not create buckets.

              See also A NOTE ON AMAZON S3 below.


       --s3-unencrypted-connection
              Don't use SSL for connections to S3.

              This may be much faster, at some cost to confidentiality.

              With this option, anyone who can observe traffic between your
              computer and S3 will be able to tell: that you are using
              Duplicity, the name of the bucket, your AWS Access Key ID, the
              increment dates and the amount of data in each increment.

              This option affects only the connection, not the GPG
              encryption of the backup increment files.  Unless that is
              disabled, an observer will not be able to see the file names
              or contents.

              This option is not available when using the newer boto3
              backend.

              See also A NOTE ON AMAZON S3 below.


       --s3-use-new-style
              When operating on Amazon S3 buckets, use new-style subdomain
              bucket addressing.  This is now the preferred method to access
              Amazon S3, but is not backwards compatible if your bucket name
              contains upper-case characters or other characters that are
              not valid in a hostname.

              This option has no effect when using the newer boto3 backend,
              which will always use new-style subdomain bucket naming.

              See also A NOTE ON AMAZON S3 below.


       --s3-use-rrs
              Store volumes using Reduced Redundancy Storage when uploading
              to Amazon S3.  This will lower the cost of storage but also
              lower the durability of stored volumes to 99.99% instead of
              the 99.999999999% durability offered by Standard Storage on
              S3.


       --s3-use-ia
              Store volumes using Standard - Infrequent Access when
              uploading to Amazon S3.  This storage class has a lower
              storage cost but a higher per-request cost, and the storage
              cost is calculated against a 30-day storage minimum.
              According to Amazon, this storage is ideal for long-term file
              storage, backups, and disaster recovery.


       --s3-use-onezone-ia
              Store volumes using One Zone - Infrequent Access when
              uploading to Amazon S3.  This storage is similar to Standard -
              Infrequent Access, but only stores object data in one
              Availability Zone.


       --s3-use-glacier
              Store volumes using Glacier S3 when uploading to Amazon S3.
              This storage class has a lower cost of storage but a higher
              per-request cost along with delays of up to 12 hours from the
              time of the retrieval request.  This storage cost is
              calculated against a 90-day storage minimum.  According to
              Amazon, this storage is ideal for data archiving and long-term
              backup, offering 99.999999999% durability.  To restore a
              backup you will have to manually migrate all data stored on
              AWS Glacier back to Standard S3 and wait for AWS to complete
              the migration.  Notice: Duplicity will store the manifest.gpg
              files from full and incremental backups on AWS S3 standard
              storage to allow quick retrieval for later incremental
              backups; all other data is stored in S3 Glacier.


       --s3-use-deep-archive
              Store volumes using Glacier Deep Archive S3 when uploading to
              Amazon S3.  This storage class has a lower cost of storage but
              a higher per-request cost along with delays of up to 48 hours
              from the time of the retrieval request.  This storage cost is
              calculated against a 180-day storage minimum.  According to
              Amazon, this storage is ideal for data archiving and long-term
              backup, offering 99.999999999% durability.  To restore a
              backup you will have to manually migrate all data stored on
              AWS Glacier Deep Archive back to Standard S3 and wait for AWS
              to complete the migration.  Notice: Duplicity will store the
              manifest.gpg files from full and incremental backups on AWS S3
              standard storage to allow quick retrieval for later
              incremental backups; all other data is stored in S3 Glacier
              Deep Archive.

              Glacier Deep Archive is only available when using the newer
              boto3 backend.


       --s3-use-multiprocessing
              Allow multipart volume uploads to S3 through multiprocessing.
              This option requires Python 2.6 and can be used to make
              uploads to S3 more efficient.  If enabled, files duplicity
              uploads to S3 will be split into chunks and uploaded in
              parallel.  Useful if you want to saturate your bandwidth or if
              large files are failing during upload.

              This has no effect when using the newer boto3 backend.  Boto3
              always attempts to use multiprocessing when it is believed it
              will be more efficient.

              See also A NOTE ON AMAZON S3 below.


       --s3-use-server-side-encryption
              Allow use of server-side encryption in S3.


       --s3-multipart-chunk-size
              Chunk size (in MB) used for S3 multipart uploads.  Make this
              smaller than --volsize to maximize the use of your bandwidth.
              For example, a chunk size of 10MB with a volsize of 30MB will
              result in 3 chunks per volume upload.

              This has no effect when using the newer boto3 backend.

              See also A NOTE ON AMAZON S3 below.

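The chunk arithmetic from the example above is simple ceiling division:

```python
import math

def chunks_per_volume(volsize_mb, chunk_size_mb):
    # Number of multipart chunks a single volume upload is split into.
    return math.ceil(volsize_mb / chunk_size_mb)

print(chunks_per_volume(30, 10))  # 3 chunks, as in the example above
print(chunks_per_volume(25, 10))  # 3 chunks (the last one is partial)
```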
764
765 --s3-multipart-max-procs
766 Specify the maximum number of processes to spawn when performing
767 a multipart upload to S3. By default, this will choose the
768 number of processors detected on your system (e.g. 4 for a
769 4-core system). You can adjust this number as required to ensure
770 you don't overload your system while maximizing the use of your
771 bandwidth.
772
773 This has no effect when using the newer boto3 backend.
774
775 See also A NOTE ON AMAZON S3 below.
776
777
778 --s3-multipart-max-timeout
779 You can control the maximum time (in seconds) a multipart upload
780 can spend on uploading a single chunk to S3. This may be useful
781 if you find your system hanging on multipart uploads or if you'd
782 like to control the time variance when uploading to S3 to ensure
783 you kill connections to slow S3 endpoints.
784
785 This has no effect when using the newer boto3 backend.
786
787 See also A NOTE ON AMAZON S3 below.
788
789
790 --azure-blob-tier
791 Standard storage tier used for backup files (Hot|Cool|Archive).
792
793
794 --azure-max-single-put-size
795 Specify the number of the largest supported upload size where
796 the Azure library makes only one put call. If the content size
797 is known and below this value the Azure library will only
798 perform one put request to upload one block. The number is
799 expected to be in bytes.
800
801
802 --azure-max-block-size
803 Specify the number for the block size used by the Azure library
804 to upload blobs if it is split into multiple blocks. The
805 maximum block size the service supports is 104857600 (100MiB)
806 and the default is 4194304 (4MiB)
807
808
809 --azure-max-connections
810 Specify the number of maximum connections to transfer one blob
811 to Azure blob size exceeds 64MB. The default values is 2.
812
813
814 --scp-command command
815 (only ssh pexpect backend with --use-scp enabled) The command
816 will be used instead of "scp" to send or receive files. To list
817 and delete existing files, the sftp command is used.
818 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
819
820
821 --sftp-command command
822 (only ssh pexpect backend) The command will be used instead of
823 "sftp".
824 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
825
826
827 --short-filenames
828 If this option is specified, the names of the files duplicity
829 writes will be shorter (about 30 chars) but less understandable.
830 This may be useful when backing up to MacOS or another OS or FS
831 that doesn't support long filenames.
832
833
834 --sign-key key-id
835 This option can be used when backing up, restoring or verifying.
836 When backing up, all backup files will be signed with the given key-id.
837 When restoring, duplicity will signal an error if any remote
838 file is not signed with the given key-id. The key-id can be
839 given in any of the formats supported by GnuPG; see gpg(1),
840 section "HOW TO SPECIFY A USER ID" for details. Should be
841 specified only once because currently only one signing key is
842 supported. Last entry overrides all other entries.
843 See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
844
845
846 --ssh-askpass
847 Tells the ssh backend to prompt the user for the remote system
848 password, if it was not defined in target url and no
849 FTP_PASSWORD env var is set. This password is also used for
850 passphrase-protected ssh keys.
851
852
853 --ssh-options options
854 Allows you to pass options to the ssh backend. Can be specified
855 multiple times or as a space separated options list. The
856 options list should be of the form "-oOpt1='parm1'
857 -oOpt2='parm2'" where the option string is quoted and the only
858 spaces allowed are between options. The option string will be
859 passed verbatim to both scp and sftp, whose command line syntax
860 differs slightly; the options should therefore be given in the
861 long option format described in ssh_config(5).
862
863 example of a list:
864
865 duplicity --ssh-options="-oProtocol=2
866 -oIdentityFile='/my/backup/id'" /home/me
867 scp://user@host/some_dir
868
869 example with multiple parameters:
870
871 duplicity --ssh-options="-oProtocol=2" --ssh-
872 options="-oIdentityFile='/my/backup/id'" /home/me
873 scp://user@host/some_dir
874
875 NOTE: The ssh paramiko backend currently supports only the -i or
876 -oIdentityFile setting. If needed provide more host specific
877 options via ssh_config file.
878
879
880 --ssl-cacert-file file
881 (only webdav & lftp backend) Provide a cacert file for ssl
882 certificate verification.
883 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
884
885
886 --ssl-cacert-path path/to/certs/
887 (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
888 recent lftp) Provide a path to a folder containing cacert files
889 for ssl certificate verification.
890 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
891
892
893 --ssl-no-check-certificate
894 (only webdav & lftp backend) Disable ssl certificate
895 verification.
896 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
897
898
899 --swift-storage-policy
900 Use this storage policy when operating on Swift containers.
901 See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.
902
903
904 --metadata-sync-mode mode
905 This option defaults to 'partial', but you can set it to 'full'.
906 Use 'partial' to avoid syncing metadata for backup chains that
907 you are not going to use. This saves time when restoring for
908 the first time, and lets you restore an old backup that was
909 encrypted with a different passphrase by supplying only the
910 target passphrase.
911 Use 'full' to sync metadata for all backup chains on the remote.
912
913
914 --tempdir directory
915 Use this existing directory for duplicity temporary files
916 instead of the system default, which is usually the /tmp
917 directory. This option supersedes any environment variable.
918 See also ENVIRONMENT VARIABLES.
919
920
921 -ttime, --time time, --restore-time time
922 Specify the time from which to restore or list files.
923
924
925 --time-separator char
926 Use char as the time separator in filenames instead of colon
927 (":").
928
929
930 --timeout seconds
931 Use seconds as the socket timeout value if duplicity begins to
932 timeout during network operations. The default is 30 seconds.
933
934
935 --use-agent
936 If this option is specified, then --use-agent is passed to the
937 GnuPG encryption process and it will try to connect to gpg-agent
938 before it asks for a passphrase for --encrypt-key or --sign-key
939 if needed.
940 Note: Contrary to previous versions of duplicity, this option
941 will also be honored by GnuPG 2 and newer versions. If GnuPG 2
942 is in use, duplicity passes the option --pinentry-mode=loopback
943 to the gpg process unless --use-agent is specified on the
944 duplicity command line. This has the effect that GnuPG 2 uses
945 the agent only if --use-agent is given, just like GnuPG 1.
946
947
948 --verbosity level, -vlevel
949 Specify output verbosity level (log level). Named levels and
950 corresponding values are 0 Error, 2 Warning, 4 Notice (default),
951 8 Info, 9 Debug (noisiest).
952 level may also be
953 a character: e, w, n, i, d
954 a word: error, warning, notice, info, debug
955
956 The options -v4, -vn and -vnotice are functionally equivalent,
957 as are the mixed/upper-case versions -vN, -vNotice and -vNOTICE.
958
959
960 --version
961 Print duplicity's version and quit.
962
963
964 --volsize number
965 Change the volume size to number MB. Default is 200MB.
966
967
969 TMPDIR, TEMP, TMP
970 In decreasing order of importance, specifies the directory to
971 use for temporary files (inherited from Python's tempfile
972 module). The --tempdir option, if given, supersedes any of
973 these.
974
975 FTP_PASSWORD
976 Supported by most backends which are password capable. More
977 secure than setting it in the backend url (which might be
978 readable in the operating systems process listing to other users
979 on the same machine).
980
981 PASSPHRASE
982 This passphrase is passed to GnuPG. If this is not set, the user
983 will be prompted for the passphrase.
984
985 SIGN_PASSPHRASE
986 The passphrase to be used for --sign-key. If omitted and the
987 sign key is also one of the keys to encrypt against, PASSPHRASE
988 will be reused instead. Otherwise, if a passphrase is needed
989 but not set, the user will be prompted for it.
990
991
993 Duplicity uses the URL format (as standard as possible) to define data
994 locations. The generic format for a URL is:
995
996 scheme://[user[:password]@]host[:port]/[/]path
997
998 Exposing the password on the command line is permitted but not
999 recommended, since it could be revealed to anyone with permission
1000 to do process listings. Consider setting the environment variable
1001 FTP_PASSWORD instead, which is used by most, if not all, backends
1002 regardless of its name.
1003
1004 In protocols that support it, the path may be preceded by a single
1005 slash, '/path', to represent a relative path to the target home
1006 directory, or preceded by a double slash, '//path', to represent an
1007 absolute filesystem path.
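The generic URL form above can be decomposed with a standard URL parser.
The following sketch uses Python's urllib.parse as an illustrative
stand-in; duplicity's internal parser may differ in details:

```python
# Illustrative only: decomposing the generic
# scheme://[user[:password]@]host[:port]/[/]path form.
# Note that a single leading slash in the path is relative to the
# login home, while a double slash ('//path') is an absolute path.
from urllib.parse import urlsplit

def split_backend_url(url):
    """Return (scheme, user, password, host, port, path) for a backend URL."""
    parts = urlsplit(url)
    return (parts.scheme, parts.username, parts.password,
            parts.hostname, parts.port, parts.path)

print(split_backend_url("sftp://uid@other.host:2222//var/backups"))
```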
1008
1009 Note:
1010 Scheme (protocol) access may be provided by more than one
1011 backend. In case the default backend is buggy or simply not
1012 working in a specific case it might be worth trying an
1013 alternative implementation. Alternative backends can be
1014 selected by prefixing the scheme with the name of the
1015 alternative backend e.g. ncftp+ftp:// and are mentioned below
1016 the scheme's syntax summary.
1017
1018
1019 Formats of each of the URL schemes follow:
1020
1021
1022 Amazon Drive Backend
1023
1024 ad://some_dir
1025
1026 See also A NOTE ON AMAZON DRIVE
1027
1028 Azure
1029
1030 azure://container-name
1031
1032 See also A NOTE ON AZURE ACCESS
1033
1034 B2
1035
1036 b2://account_id[:application_key]@bucket_name/[folder/]
1037
1038 Cloud Files (Rackspace)
1039
1040 cf+http://container_name
1041
1042 See also A NOTE ON CLOUD FILES ACCESS
1043
1044 Dropbox
1045
1046 dpbx:///some_dir
1047
1048 Make sure to read A NOTE ON DROPBOX ACCESS first!
1049
1050 Local file path
1051
1052 file://[relative|/absolute]/local/path
1053
1054 FISH (Files transferred over Shell protocol) over ssh
1055
1056 fish://user[:password]@other.host[:port]/[relative|/absolute]_path
1057
1058 FTP
1059
1060 ftp[s]://user[:password]@other.host[:port]/some_dir
1061
1062 NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
1063 default is lftp+ftp://...
1064
1065 Google Docs
1066
1067 gdocs://user[:password]@other.host/some_dir
1068
1069 NOTE: use pydrive+, gdata+ prefixes to enforce a specific
1070 backend, default is pydrive+gdocs://...
1071
1072 Google Cloud Storage
1073
1074 gs://bucket[/prefix]
1075
1076 HSI
1077
1078 hsi://user[:password]@other.host/some_dir
1079
1080 hubiC
1081
1082 cf+hubic://container_name
1083
1084 See also A NOTE ON HUBIC
1085
1086 IMAP email storage
1087
1088 imap[s]://user[:password]@host.com[/from_address_prefix]
1089
1090 See also A NOTE ON IMAP
1091
1092 MEGA.nz cloud storage (only works for accounts created prior to
1093 November 2018, uses "megatools")
1094
1095 mega://user[:password]@mega.nz/some_dir
1096
1097 NOTE: if not given in the URL, relies on password being stored
1098 within $HOME/.megarc (as used by the "megatools" utilities)
1099
1100 MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd"
1101 tools)
1102
1103 megav2://user[:password]@mega.nz/some_dir
1104
1105 NOTE: although "MEGAcmd" no longer uses a configuration file,
1106 for convenience this backend will look for the user password in
1107 the $HOME/.megav2rc file (same syntax as the old
1108 $HOME/.megarc):
1109 [Login]
1110 Username = MEGA_USERNAME
1111 Password = MEGA_PASSWORD
1112
1113 OneDrive Backend
1114
1115 onedrive://some_dir
1116
1117 Par2 Wrapper Backend
1118
1119 par2+scheme://[user[:password]@]host[:port]/[/]path
1120
1121 See also A NOTE ON PAR2 WRAPPER BACKEND
1122
1123 Rclone Backend
1124
1125 rclone://remote:/some_dir
1126
1127 See also A NOTE ON RCLONE BACKEND
1128
1129 Rsync via daemon
1130
1131 rsync://user[:password]@host.com[:port]::[/]module/some_dir
1132
1133 Rsync over ssh (only key auth)
1134
1135 rsync://user@host.com[:port]/[relative|/absolute]_path
1136
1137 S3 storage (Amazon)
1138
1139 s3://host[:port]/bucket_name[/prefix]
1140 s3+http://bucket_name[/prefix]
1141 defaults to the legacy boto backend based on boto v2 (last
1142 update 2018/07)
1143 alternatively try the newer boto3+s3://bucket_name[/prefix]
1144
1145 For details see A NOTE ON AMAZON S3 and see also A NOTE ON
1146 EUROPEAN S3 BUCKETS below.
1147
1148 SCP/SFTP access
1149
1150 scp://.. or
1151 sftp://user[:password]@other.host[:port]/[relative|/absolute]_path
1152
1153 defaults are paramiko+scp:// and paramiko+sftp://
1154 alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
1155 See also --ssh-askpass, --ssh-options and A NOTE ON SSH
1156 BACKENDS.
1157
1158 Swift (Openstack)
1159
1160 swift://container_name[/prefix]
1161
1162 See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
1163
1164 Public Cloud Archive (OVH)
1165
1166 pca://container_name[/prefix]
1167
1168 See also A NOTE ON PCA ACCESS
1169
1170 Tahoe-LAFS
1171
1172 tahoe://alias/directory
1173
1174 WebDAV
1175
1176 webdav[s]://user[:password]@other.host[:port]/some_dir
1177
1178 alternatively try lftp+webdav[s]://
1179
1180 pydrive
1181
1182 pydrive://<service account's email
1183 address>@developer.gserviceaccount.com/some_dir
1184
1185 See also A NOTE ON PYDRIVE BACKEND below.
1186
1187 multi
1188
1189 multi:///path/to/config.json
1190
1191 See also A NOTE ON MULTI BACKEND below.
1192
1193 MediaFire
1194
1195 mf://user[:password]@mediafire.com/some_dir
1196
1197 See also A NOTE ON MEDIAFIRE BACKEND below.
1198
1199
1201 duplicity uses time strings in two places. Firstly, many of the files
1202 duplicity creates will have the time in their filenames in the w3
1203 datetime format as described in a w3 note at http://www.w3.org/TR/NOTE-
1204 datetime. Basically they look like "2001-07-15T04:09:38-07:00", which
1205 means what it looks like. The "-07:00" section means the time zone is
1206 7 hours behind UTC.
1207
1208 Secondly, the -t, --time, and --restore-time options take a time
1209 string, which can be given in any of several formats:
1210
1211 1. the string "now" (refers to the current time)
1212
1213 2. a sequence of digits, like "123456890" (indicating the time in
1214 seconds after the epoch)
1215
1216 3. A string like "2002-01-25T07:00:00+02:00" in datetime format
1217
1218 4. An interval, which is a number followed by one of the characters
1219 s, m, h, D, W, M, or Y (indicating seconds, minutes, hours,
1220 days, weeks, months, or years respectively), or a series of such
1221 pairs. In this case the string refers to the time that preceded
1222 the current time by the length of the interval. For instance,
1223 "1h78m" indicates the time that was one hour and 78 minutes ago.
1224 The calendar here is unsophisticated: a month is always 30 days,
1225 a year is always 365 days, and a day is always 86400 seconds.
1226
1227 5. A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY, or
1228 MM-DD-YYYY, which indicates midnight on the day in question,
1229 relative to the current time zone settings. For instance,
1230 "2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
1231 2002.
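The interval arithmetic in item 4 can be made concrete. The sketch
below is not duplicity's own code, only a minimal illustration of
reducing an interval string such as "1h78m" to seconds under the
simplified calendar described above (a month is always 30 days, a
year always 365 days):

```python
# Hypothetical helper: convert interval strings ('1h78m', '2Y5D', ...)
# to seconds using the fixed unit sizes stated in the man page.
import re

UNIT_SECONDS = {
    "s": 1, "m": 60, "h": 3600, "D": 86400,
    "W": 7 * 86400, "M": 30 * 86400, "Y": 365 * 86400,
}

def interval_to_seconds(spec):
    """Reduce an interval like '1h78m' to a number of seconds."""
    pairs = re.findall(r"(\d+)([smhDWMY])", spec)
    # Reject strings that are not purely a series of number/unit pairs.
    if not pairs or "".join(n + u for n, u in pairs) != spec:
        raise ValueError("bad interval: %r" % spec)
    return sum(int(n) * UNIT_SECONDS[u] for n, u in pairs)

print(interval_to_seconds("1h78m"))   # one hour and 78 minutes ago
```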
1232
1233
1235 When duplicity is run, it searches through the given source directory
1236 and backs up all the files specified by the file selection system. The
1237 file selection system comprises a number of file selection conditions,
1238 which are set using one of the following command line options:
1239 --exclude
1240 --exclude-device-files
1241 --exclude-if-present
1242 --exclude-filelist
1243 --exclude-regexp
1244 --include
1245 --include-filelist
1246 --include-regexp
1247 Each file selection condition either matches or doesn't match a given
1248 file. A given file is excluded by the file selection system exactly
1249 when the first matching file selection condition specifies that the
1250 file be excluded; otherwise the file is included.
1251
1252 For instance,
1253
1254 duplicity --include /usr --exclude /usr /usr
1255 scp://user@host/backup
1256
1257 is exactly the same as
1258
1259 duplicity /usr scp://user@host/backup
1260
1261 because the include and exclude directives match exactly the same
1262 files, and the --include comes first, giving it precedence. Similarly,
1263
1264 duplicity --include /usr/local/bin --exclude /usr/local /usr
1265 scp://user@host/backup
1266
1267 would back up the /usr/local/bin directory (and its contents), but not
1268 /usr/local/doc.
1269
1270 The include, exclude, include-filelist, and exclude-filelist options
1271 accept some extended shell globbing patterns. These patterns can
1272 contain *, **, ?, and [...] (character ranges). As in a normal shell,
1273 * can be expanded to any string of characters not containing "/", ?
1274 expands to any character except "/", and [...] expands to a single
1275 character of those characters specified (ranges are acceptable). The
1276 new special pattern, **, expands to any string of characters whether or
1277 not it contains "/". Furthermore, if the pattern starts with
1278 "ignorecase:" (case insensitive), then this prefix will be removed and
1279 any character in the string can be replaced with an upper- or lowercase
1280 version of itself.
1281
1282 Remember that you may need to quote these characters when typing them
1283 into a shell, so the shell does not interpret the globbing patterns
1284 before duplicity sees them.
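The difference between *, ?, and ** can be sketched as a translation
to regular expressions. This is an assumption-laden illustration, not
duplicity's real matcher: character ranges, the "ignorecase:" prefix,
and directory-containment rules are omitted.

```python
# Sketch of the extended globbing semantics: '*' and '?' never cross
# a '/', while '**' may. Bracket ranges and 'ignorecase:' are not
# handled here.
import re

def glob_to_regex(pattern):
    out, i = [], 0
    while i < len(pattern):
        if pattern[i:i + 2] == "**":
            out.append(".*")          # '**' crosses '/' boundaries
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")       # '*' stops at '/'
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")        # '?' matches one non-'/' char
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("".join(out) + r"\Z")

print(bool(glob_to_regex("/usr/*/bin").match("/usr/local/bin")))  # True
print(bool(glob_to_regex("/usr/*/bin").match("/usr/a/b/bin")))    # False
print(bool(glob_to_regex("/usr/**.py").match("/usr/a/b/c.py")))   # True
```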
1285
1286 The --exclude pattern option matches a file if:
1287
1288 1. pattern can be expanded into the file's filename, or
1289 2. the file is inside a directory matched by the option.
1290
1291 Conversely, the --include pattern matches a file if:
1292
1293 1. pattern can be expanded into the file's filename, or
1294 2. the file is inside a directory matched by the option, or
1295 3. the file is a directory which contains a file matched by the
1296 option.
1297
1298 For example,
1299
1300 --exclude /usr/local
1301
1302 matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
1303 It is the same as --exclude /usr/local --exclude '/usr/local/**'.
1304
1305 On the other hand
1306
1307 --include /usr/local
1308
1309 specifies that /usr, /usr/local, /usr/local/lib, and
1310 /usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus you
1311 don't have to worry about including parent directories to make sure
1312 that included subdirectories have somewhere to go.
1313
1314 Finally,
1315
1316 --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'
1317
1318 would match a file like /usR/5fOO/hello/there/world.py. If it did
1319 match anything, it would also match /usr. If there is no existing file
1320 that the given pattern can be expanded into, the option will not match
1321 /usr alone.
1322
1323 The --include-filelist, and --exclude-filelist, options also introduce
1324 file selection conditions. They direct duplicity to read in a text
1325 file (either ASCII or UTF-8), each line of which is a file
1326 specification, and to include or exclude the matching files. Lines are
1327 separated by newlines or nulls, depending on whether the --null-
1328 separator switch was given. Each line in the filelist will be
1329 interpreted as a globbing pattern the way --include and --exclude
1330 options are interpreted, except that lines starting with "+ " are
1331 interpreted as include directives, even if found in a filelist
1332 referenced by --exclude-filelist. Similarly, lines starting with "- "
1333 exclude files even if they are found within an include filelist.
1334
1335 For example, if file "list.txt" contains the lines:
1336
1337 /usr/local
1338 - /usr/local/doc
1339 /usr/local/bin
1340 + /var
1341 - /var
1342
1343 then --include-filelist list.txt would include /usr, /usr/local, and
1344 /usr/local/bin. It would exclude /usr/local/doc,
1345 /usr/local/doc/python, etc. It would also include /usr/local/man, as
1346 this is included within /usr/local. Finally, it is undefined what
1347 happens with /var. A single file list should not contain conflicting
1348 file specifications.
1349
1350 Each line in the filelist will also be interpreted as a globbing
1351 pattern the way --include and --exclude options are interpreted. For
1352 instance, if the file "list.txt" contains the lines:
1353
1354 dir/foo
1355 + dir/bar
1356 - **
1357
1358 Then --include-filelist list.txt would be exactly the same as
1359 specifying --include dir/foo --include dir/bar --exclude ** on the
1360 command line.
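The "+ " / "- " line handling above can be sketched as follows. This
is an assumption about the parsing shape, not duplicity's source: each
line becomes an (include, pattern) pair, with bare lines taking the
polarity of the option that referenced the filelist.

```python
# Hypothetical filelist parser: '+ ' forces include, '- ' forces
# exclude, any other line inherits the default polarity of the
# --include-filelist / --exclude-filelist option that named the file.
def parse_filelist(text, default_include=True):
    conditions = []
    for line in text.splitlines():
        if not line.strip():
            continue                      # skip blank lines
        if line.startswith("+ "):
            conditions.append((True, line[2:]))
        elif line.startswith("- "):
            conditions.append((False, line[2:]))
        else:
            conditions.append((default_include, line))
    return conditions

print(parse_filelist("dir/foo\n+ dir/bar\n- **"))
```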
1361
1362 Finally, the --include-regexp and --exclude-regexp options allow files
1363 to be included and excluded if their filenames match a python regular
1364 expression. Regular expression syntax is too complicated to explain
1365 here, but is covered in Python's library reference. Unlike the
1366 --include and --exclude options, the regular expression options don't
1367 match files containing or contained in matched files. So for instance
1368
1369 --include '[0-9]{7}(?!foo)'
1370
1371 matches any files whose full pathnames contain 7 consecutive digits
1372 which aren't followed by 'foo'. However, it wouldn't match /home even
1373 if /home/ben/1234567 existed.
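The regular expression in this example can be checked directly with
Python's re module, which is the syntax the options accept:

```python
# The pattern from the text: seven consecutive digits not followed by
# 'foo'. Unlike --include/--exclude globs, a regexp match does not
# pull in parent directories such as /home.
import re

pattern = re.compile(r"[0-9]{7}(?!foo)")

print(bool(pattern.search("/home/ben/1234567")))     # True
print(bool(pattern.search("/home/ben/1234567foo")))  # False
print(bool(pattern.search("/home")))                 # False
```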
1374
1375
1377 1. The API Keys used for Amazon Drive have not been granted
1378 production limits. Amazon do not say what the development
1379 limits are and are not replying to requests to whitelist
1380 duplicity. A related tool, acd_cli, was demoted to development
1381 limits, but continues to work fine except for cases of excessive
1382 usage. If you experience throttling and similar issues with
1383 Amazon Drive using this backend, please report them to the
1384 mailing list.
1385
1386 2. If you previously used the acd+acdcli backend, it is strongly
1387 recommended to update to the ad backend instead, since it
1388 interfaces directly with Amazon Drive. You will need to setup
1389 the OAuth once again, but can otherwise keep your backups and
1390 config.
1391
1392
1394 When backing up to Amazon S3, two backend implementations are
1395 available. The schemes "s3" and "s3+http" are implemented using the
1396 older boto library, which has been deprecated and is no longer
1397 supported. The "boto3+s3" scheme is based on the newer boto3 library.
1398 This new backend fixes several known limitations in the older backend,
1399 which have crept in as Amazon S3 has evolved while the deprecated boto
1400 library has not kept up.
1401
1402 The boto3 backend should behave largely the same as the older S3
1403 backend, but there are some differences in the handling of some of the
1404 "S3" options. Additionally, there are some compatibility differences
1405 with the new backend. For these reasons, both backends have been
1406 retained for the time being. See the documentation for specific
1407 options regarding differences related to each backend.
1408
1409 The boto3 backend does not support bucket creation. This is a
1410 deliberate choice which simplifies the code, and side steps problems
1411 related to region selection. Additionally, it is probably not a good
1412 practice to give your backup role bucket creation rights. In most
1413 cases the role used for backups should probably be limited to specific
1414 buckets.
1415
1416 The boto3 backend only supports newer domain style buckets. Amazon is
1417 moving to deprecate the older bucket style, so migration is
1418 recommended. Use the older s3 backend for compatibility with backups
1419 stored in buckets using older naming conventions.
1420
1421 The boto3 backend does not currently support initiating restores from
1422 the glacier storage class. When restoring a backup from glacier or
1423 glacier deep archive, the backup files must first be restored out of
1424 band. There are multiple options when restoring backups from cold
1425 storage, which vary in both cost and speed. See Amazon's documentation
1426 for details.
1427
1428
1430 The Azure backend requires the Microsoft Azure Storage SDK for Python
1431 to be installed on the system. See REQUIREMENTS above.
1432
1433 It uses environment variables for authentication: AZURE_ACCOUNT_NAME
1434 (required), AZURE_ACCOUNT_KEY (optional), AZURE_SHARED_ACCESS_SIGNATURE
1435 (optional). One of AZURE_ACCOUNT_KEY or AZURE_SHARED_ACCESS_SIGNATURE
1436 is required.
1437
1438 A container name must be a valid DNS name, conforming to the following
1439 naming rules:
1440
1441
1442 1. Container names must start with a letter or number, and
1443 can contain only letters, numbers, and the dash (-)
1444 character.
1445
1446 2. Every dash (-) character must be immediately preceded and
1447 followed by a letter or number; consecutive dashes are
1448 not permitted in container names.
1449
1450 3. All letters in a container name must be lowercase.
1451
1452 4. Container names must be from 3 through 63 characters
1453 long.
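The four naming rules above can be condensed into a small validator.
This is an illustrative sketch of the rules as stated, not code used
by duplicity or the Azure SDK:

```python
# Container name rules: 3-63 chars; lowercase letters, digits, dashes;
# must start and end with a letter or digit; no consecutive dashes.
import re

def valid_container_name(name):
    return bool(
        3 <= len(name) <= 63
        and re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name)
    )

print(valid_container_name("my-backups"))   # True
print(valid_container_name("My-Backups"))   # False: uppercase letters
print(valid_container_name("a--b"))         # False: consecutive dashes
print(valid_container_name("ab"))           # False: too short
```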
1454
1455
1457 Pyrax is Rackspace's next-generation Cloud management API, including
1458 Cloud Files access. The cfpyrax backend requires the pyrax library to
1459 be installed on the system. See REQUIREMENTS above.
1460
1461 Cloudfiles is Rackspace's now deprecated implementation of OpenStack
1462 Object Storage protocol. Users wishing to use Duplicity with Rackspace
1463 Cloud Files should migrate to the new Pyrax plugin to ensure support.
1464
1465 The backend requires python-cloudfiles to be installed on the system.
1466 See REQUIREMENTS above.
1467
1468 It uses three environment variables for authentication:
1469 CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
1470 CLOUDFILES_AUTHURL (optional)
1471
1472 If CLOUDFILES_AUTHURL is unspecified it will default to the value
1473 provided by python-cloudfiles, which points to Rackspace; hence this
1474 value must be set in order to use other cloud files providers.
1475
1476
1478 1. First of all, the Dropbox backend requires a valid
1479 authentication token, passed via the DPBX_ACCESS_TOKEN
1480 environment variable.
1481 To obtain one, create a 'Dropbox API' application at:
1482 https://www.dropbox.com/developers/apps/create
1483 Then visit the app settings and use the 'Generated access
1484 token' under the OAuth2 section.
1485 Alternatively, you can let duplicity generate the access token
1486 itself. In that case, temporarily export DPBX_APP_KEY and
1487 DPBX_APP_SECRET using values from the app settings page and run
1488 duplicity interactively.
1489 It will print the URL that you need to open in the browser to
1490 obtain an OAuth2 token for the application. Just follow the
1491 on-screen instructions and then put the generated token into
1492 the DPBX_ACCESS_TOKEN variable. Once done, feel free to unset
1493 DPBX_APP_KEY and DPBX_APP_SECRET.
1494
1495
1496 2. "some_dir" must already exist in the Dropbox folder. Depending
1497 on access token kind it may be:
1498 Full Dropbox: path is absolute and starts from 'Dropbox'
1499 root folder.
1500 App Folder: path is relative to the application folder.
1501 Dropbox client will show it in ~/Dropbox/Apps/<app-name>
1502
1503
1504 3. When using Dropbox for storage, be aware that all files,
1505 including the ones in the Apps folder, will be synced to all
1506 connected computers. You may prefer to use a separate Dropbox
1507 account specially for the backups, and not connect any computers
1508 to that account. Alternatively you can configure selective sync
1509 on all computers to avoid syncing of backup files.
1510
1511
1513 Amazon S3 provides the ability to choose the location of a bucket upon
1514 its creation. The purpose is to enable the user to choose a location
1515 which is topologically closer on the network, because it may
1516 allow for faster data transfers.
1517
1518 duplicity will create a new bucket the first time a bucket access is
1519 attempted. At this point, the bucket will be created in Europe if
1520 --s3-european-buckets was given. For reasons having to do with how the
1521 Amazon S3 service works, this also requires the use of the --s3-use-
1522 new-style option. This option turns on subdomain based bucket
1523 addressing in S3. The details are beyond the scope of this man page,
1524 but it is important to know that your bucket must not contain upper
1525 case letters or any other characters that are not valid parts of a
1526 hostname. Consequently, for reasons of backwards compatibility, use of
1527 subdomain based bucket addressing is not enabled by default.
1528
1529 Note that you will need to use --s3-use-new-style for all operations on
1530 European buckets; not just upon initial creation.
1531
1532 You only need to use --s3-european-buckets upon initial creation, but
1533 you may use it at all times for consistency.
1534
1535 Further note that when creating a new European bucket, it can take a
1536 while before the bucket is fully accessible. At the time of this
1537 writing it is unclear to what extent this is an expected feature of
1538 Amazon S3, but in practice you may experience timeouts, socket errors
1539 or HTTP errors when trying to upload files to your newly created
1540 bucket. Give it a few minutes and the bucket should function normally.
1541
1542
1544 Filename prefixes can be used in multi backend with mirror mode to
1545 define affinity rules. They can also be used in conjunction with S3
1546 lifecycle rules to transition archive files to Glacier, while keeping
1547 metadata (signature and manifest files) on S3.
1548
1549 Duplicity does not require access to archive files except when
1550 restoring from backup.
1551
1552
1554 Support for Google Cloud Storage relies on its Interoperable Access,
1555 which must be enabled for your account. Once enabled, you can generate
1556 Interoperable Storage Access Keys and pass them to duplicity via the
1557 GS_ACCESS_KEY_ID and GS_SECRET_ACCESS_KEY environment variables.
1558 Alternatively, you can run gsutil config -a to have the Google Cloud
1559 Storage utility populate the ~/.boto configuration file.
1560
1561 Enable Interoperable Access:
1562 https://code.google.com/apis/console#:storage
1563 Create Access Keys:
1564 https://code.google.com/apis/console#:storage:legacy
1565
1566
1568 The hubic backend requires the pyrax library to be installed on the
1569 system. See REQUIREMENTS above. You will need to set your credentials
1570 for hubiC in a file called ~/.hubic_credentials, following this
1571 pattern:
1572
1573 [hubic]
1574 email = your_email
1575 password = your_password
1576 client_id = api_client_id
1577 client_secret = api_secret_key
1578 redirect_uri = http://localhost/
1579
1580
1582 An IMAP account can be used as a target for the upload. The userid may
1583 be specified and the password will be requested.
1584
1585 The from_address_prefix may be specified (and probably should be). The
1586 text will be used as the "From" address in the IMAP server. Then on a
1587 restore (or list) command the from_address_prefix will distinguish
1588 between different backups.
1589
1590
1592 The multi backend allows duplicity to combine the storage available in
1593 more than one backend store (e.g., you can store across a google drive
1594 account and a onedrive account to get effectively the combined storage
1595 available in both). The URL path specifies a JSON formatted config file
1596 containing a list of the backends it will use. The URL may also specify
1597 "query" parameters to configure overall behavior. Each element of the
1598 list must have a "url" element, and may also contain an optional
1599 "description" and an optional "env" list of environment variables used
1600 to configure that backend.
1601
1602 Query Parameters
1603 Query parameters come after the file URL in standard HTTP format for
1604 example:
1605 multi:///path/to/config.json?mode=mirror&onfail=abort
1606 multi:///path/to/config.json?mode=stripe&onfail=continue
1607 multi:///path/to/config.json?onfail=abort&mode=stripe
1608 multi:///path/to/config.json?onfail=abort
1609 Order does not matter, however unrecognized parameters are considered
1610 an error.
1611
1612 mode=stripe
1613 This mode (the default) performs round-robin access to the list
1614 of backends. In this mode, all backends must be reliable as a
1615 loss of one means a loss of one of the archive files.
1616
1617 mode=mirror
1618 This mode accesses backends as a RAID1-store, storing every file
1619 in every backend and reading files from the first-successful
1620 backend. A loss of any backend should result in no failure.
1621 Note that backends added later will only get new files and may
1622 require a manual sync with one of the other operating ones.
1623
1624 onfail=continue
1625 This setting (the default) continues all write operations on a
1626 best-effort basis: any failure results in the next backend being
1627 tried. Failure is reported only when all backends fail a given
1628 operation, with the error result taken from the last failure.
1629
1630 onfail=abort
1631 This setting considers any backend write failure as a
1632 terminating condition and reports the error. Data reading and
1633 listing operations are independent of this and will try with the
1634 next backend on failure.
1635
1636 JSON File Example
1637 [
1638 {
1639 "description": "a comment about the backend",
1640 "url": "abackend://myuser@domain.com/backup",
1641 "env": [
1642 {
1643 "name" : "MYENV",
1644 "value" : "xyz"
1645 },
1646 {
1647 "name" : "FOO",
1648 "value" : "bar"
1649 }
1650 ],
1651 "prefixes": ["prefix1_", "prefix2_"]
1652 },
1653 {
1654 "url": "file:///path/to/dir"
1655 }
1656 ]
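A config file of the shape above can be loaded and sanity-checked with
a few lines of Python. This loader is an illustrative sketch (not
duplicity's own code) of the structure the multi backend expects:

```python
# Minimal loader for a multi backend config: a JSON list of backend
# entries, each with a required "url" and optional "description",
# "env" (name/value pairs), and "prefixes" fields.
import json

def load_multi_config(text):
    backends = json.loads(text)
    if not isinstance(backends, list):
        raise ValueError("config must be a JSON list of backends")
    for b in backends:
        if "url" not in b:
            raise ValueError("every backend entry needs a 'url'")
        for var in b.get("env", []):
            if not {"name", "value"} <= set(var):
                raise ValueError("env entries need 'name' and 'value'")
    return backends

cfg = load_multi_config(
    '[{"description": "a comment", "url": "file:///path/to/dir", '
    '"env": [{"name": "MYENV", "value": "xyz"}]}]'
)
print(cfg[0]["url"])
```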
1657
1658
1660 Par2 Wrapper Backend can be used in combination with all other backends
1661 to create recovery files. Just add par2+ before a regular scheme (e.g.
1662 par2+ftp://user@host/dir or par2+s3+http://bucket_name ). This will
1663 create par2 recovery files for each archive and upload them all to the
1664 wrapped backend.
1665
1666 Before restoring, archives will be verified. Corrupt archives will be
1667 repaired on the fly if there are enough recovery blocks available.
1668
1669 Use --par2-redundancy percent to adjust the size (and redundancy) of
1670 recovery files in percent.
1671
1672
1674 The pydrive backend requires Python PyDrive package to be installed on
1675 the system. See REQUIREMENTS above.
1676
1677 There are two ways to use PyDrive: with a regular account or with a
1678 "service account". With a service account, a separate account is
1679 created, that is only accessible with Google APIs and not a web login.
1680 With a regular account, you can store backups in your normal Google
1681 Drive.
1682
1683 To use a service account, go to the Google developers console at
1684 https://console.developers.google.com. Create a project, and make sure
1685 Drive API is enabled for the project. Under "APIs and auth", click
1686 Create New Client ID, then select Service Account with P12 key.
1687
1688 Download the .p12 key file of the account and convert it to the .pem
1689 format:
1690 openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem
1691
1692       The content of the .pem file should be passed to the
1693       GOOGLE_DRIVE_ACCOUNT_KEY environment variable for authentication.
1694
1695       The email address of the account will be used as part of the URL.
1696       See URL FORMAT above.
1697
1698 The alternative is to use a regular account. To do this, start as
1699 above, but when creating a new Client ID, select "Installed
1700 application" of type "Other". Create a file with the following content,
1701 and pass its filename in the GOOGLE_DRIVE_SETTINGS environment
1702 variable:
1703
1704 client_config_backend: settings
1705 client_config:
1706 client_id: <Client ID from developers' console>
1707 client_secret: <Client secret from developers' console>
1708 save_credentials: True
1709 save_credentials_backend: file
1710 save_credentials_file: <filename to cache credentials>
1711 get_refresh_token: True
1712
1713       In this scenario, the username and host parts of the URL play no
1714       role; only the path matters. During the first run, you will be
1715       prompted to visit a URL in your browser to grant access to your
1716       drive. Once granted, you will receive a verification code to paste
1717       back into duplicity. The credentials are then cached in the file
1718       referenced above for future use.
1719
1720
1721A NOTE ON RCLONE BACKEND
1722 Rclone is a powerful command line program to sync files and directories
1723 to and from various cloud storage providers.
1724
1725 Once you have configured an rclone remote via
1726
1727 rclone config
1728
1729 and successfully set up a remote (e.g. gdrive for Google Drive),
1730 assuming you can list your remote files with
1731
1732 rclone ls gdrive:mydocuments
1733
1734 you can start your backup with
1735
1736 duplicity /mydocuments rclone://gdrive:/mydocuments
1737
1738       Please note the slash after the second colon. Some storage
1739       providers will work with or without a slash after the colon, but
1740       others will not. Since duplicity will complain about a malformed
1741       URL if the slash is not present, always put it after the colon;
1742       the backend will handle it for you.
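The effect of the slash can be illustrated with a generic URL parser; the sketch below uses Python's standard library for illustration only and is not duplicity's actual parsing code:

```python
from urllib.parse import urlparse

# With the slash, the path component is recognized.
good = urlparse("rclone://gdrive:/mydocuments")

# Without it, everything lands in the network-location part and the
# path comes out empty, which URL-based tools tend to reject.
bad = urlparse("rclone://gdrive:mydocuments")

print(good.path)  # -> /mydocuments
print(bad.path)   # -> (empty string)
```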
1743
1744
1745A NOTE ON SSH BACKENDS
1746       The ssh backends support the sftp and scp/ssh transport protocols.
1747       This is a known source of user confusion, as the two are
1748       fundamentally different. If you plan to access your backend via
1749       one of them, please check the server-side requirements for sftp or
1750       scp/ssh access. To make it even more confusing, the user can
1751       choose between several ssh backends via a scheme prefix: paramiko+
1752       (default), pexpect+, lftp+... . paramiko & pexpect support
1753       --use-scp, --ssh-askpass and --ssh-options. Only the pexpect
1754       backend allows you to define --scp-command and --sftp-command.
1755
1756       The SSH paramiko backend (default) is a complete reimplementation
1757       of the ssh protocols natively in python. Advantages are speed and
1758       maintainability. A minor disadvantage is that extra packages are
1759       needed, as listed in REQUIREMENTS above. In sftp (default) mode
1760       all operations are done via the corresponding sftp commands. In
1761       scp mode ( --use-scp ) scp is used for put/get operations, while
1762       listing is done via an ssh remote shell.
1763
1764 SSH pexpect backend is the legacy ssh backend using the command line
1765 ssh binaries via pexpect. Older versions used scp for get and put
1766 operations and sftp for list and delete operations. The current
1767 version uses sftp for all four supported operations, unless the --use-
1768 scp option is used to revert to old behavior.
1769
1770       The SSH lftp backend is simply there because lftp can interact
1771       with the ssh cmd line binaries. It is meant as a last resort in
1772       case the backends above fail for some reason.
1773
1774       Why use sftp instead of scp? The change to sftp was made to allow
1775       the remote system to chroot the backup, thus providing better
1776       security, and because sftp does not suffer from the shell quoting
1777       issues that scp does. Scp also does not support any kind of file
1778       listing, so sftp or ssh access will always be needed in addition
1779       for this backend mode to work properly. Sftp does not have these
1780       limitations but needs an sftp service running on the backend
1781       server, which is sometimes not an option.
1782
1783
1784A NOTE ON SSL CERTIFICATE VERIFICATION
1785       Certificate verification, as implemented right now [02.2016],
1786       exists only in the webdav and lftp backends. Older pythons (2.7.8
1787       and below) and older lftp binaries need a file-based database of
1788       certificate authority certificates (cacert file).
1789       Newer pythons (2.7.9+) and recent lftp versions, however, support
1790       the system default certificates (usually in /etc/ssl/certs) and
1791       also allow an alternative ca cert folder via --ssl-cacert-path.
1792
1793 The cacert file has to be a PEM formatted text file as currently
1794 provided by the CURL project. See
1795
1796 http://curl.haxx.se/docs/caextract.html
1797
1798 After creating/retrieving a valid cacert file you should copy it to
1799 either
1800
1801 ~/.duplicity/cacert.pem
1802 ~/duplicity_cacert.pem
1803 /etc/duplicity/cacert.pem
1804
1805       Duplicity searches these locations in that order and will fail if
1806       it can't find one. You can however specify the option
1807       --ssl-cacert-file <file> to point duplicity to a copy elsewhere.
1808
1809       Finally there is the --ssl-no-check-certificate option to disable
1810       certificate verification altogether, in case some ssl library is
1811       missing or verification is not wanted. Use it with care; even
1812       with self-signed servers, manually providing the private ca
1813       certificate is definitely the safer option.
1814
1815
1816A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) BACKEND
1817 Swift is the OpenStack Object Storage service.
1818       The backend requires python-swiftclient to be installed on the system.
1819 python-keystoneclient is also needed to use OpenStack's Keystone
1820 Identity service. See REQUIREMENTS above.
1821
1822       It uses the following environment variables for authentication:
1823 SWIFT_USERNAME (required), SWIFT_PASSWORD (required), SWIFT_AUTHURL
1824 (required), SWIFT_USERID (required, only for IBM Bluemix
1825 ObjectStorage), SWIFT_TENANTID (required, only for IBM Bluemix
1826 ObjectStorage), SWIFT_REGIONNAME (required, only for IBM Bluemix
1827 ObjectStorage), SWIFT_TENANTNAME (optional, the tenant can be included
1828 in the username)
1829
1830 If the user was previously authenticated, the following environment
1831 variables can be used instead: SWIFT_PREAUTHURL (required),
1832 SWIFT_PREAUTHTOKEN (required)
1833
1834 If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
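As an illustration, a Keystone v2 setup could be prepared from a Python wrapper script as follows; every credential value here is a hypothetical placeholder:

```python
import os

# Hypothetical Keystone v2 credentials; replace with your own values.
os.environ["SWIFT_USERNAME"] = "myuser"
os.environ["SWIFT_PASSWORD"] = "secret"
os.environ["SWIFT_AUTHURL"] = "https://auth.example.com/v2.0"
os.environ["SWIFT_TENANTNAME"] = "mytenant"
os.environ["SWIFT_AUTHVERSION"] = "2"

# A duplicity process started from here inherits the variables, e.g.:
# subprocess.run(["duplicity", "/home/me", "swift://container_name"])
```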
1835
1836
1837A NOTE ON PCA
1838       PCA is a long-term data archival solution by OVH. It runs a
1839       slightly modified version of OpenStack Swift that introduces
1840       latency in the data retrieval process. It is a good pick for a
1841       multi backend configuration in which it receives the volumes while
1842       another backend is used to store manifests and signatures.
1843
1844       The backend requires python-swiftclient to be installed on the system.
1845 python-keystoneclient is also needed to interact with OpenStack's
1846 Keystone Identity service. See REQUIREMENTS above.
1847
1848       It uses the following environment variables for authentication:
1849       PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL
1850       (required), PCA_USERID (optional), PCA_TENANTID (optional, but
1851       either the tenant name or tenant id must be supplied),
1852       PCA_REGIONNAME (optional), PCA_TENANTNAME (optional, but either
1853       the tenant name or tenant id must be supplied)
1854
1855 If the user was previously authenticated, the following environment
1856 variables can be used instead: PCA_PREAUTHURL (required),
1857 PCA_PREAUTHTOKEN (required)
1858
1859 If PCA_AUTHVERSION is unspecified, it will default to version 2.
1860
1861
1862A NOTE ON MEDIAFIRE BACKEND
1863       This backend requires the mediafire Python library to be installed
1864       on the system. See REQUIREMENTS.
1865
1866 Use URL escaping for username (and password, if provided via command
1867 line):
1868
1869
1870 mf://duplicity%40example.com@mediafire.com/some_folder
1871
1872 The destination folder will be created for you if it does not exist.
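The percent-escaped username shown above can be produced with Python's standard library (the address is a placeholder):

```python
from urllib.parse import quote

# Percent-encode a username containing '@' so it can be embedded in
# the mf:// URL; the address is a placeholder.
user = quote("duplicity@example.com", safe="")
url = "mf://%s@mediafire.com/some_folder" % user
print(url)  # -> mf://duplicity%40example.com@mediafire.com/some_folder
```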
1873
1874
1875A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
1876       Signing and symmetrically encrypting at the same time with the gpg
1877       binary on the command line, as used within duplicity, is a
1878       particularly challenging issue. Tests showed that the following
1879       combinations work.
1880
1881 1. Setup gpg-agent properly. Use the option --use-agent and enter both
1882 passphrases (symmetric and sign key) in the gpg-agent's dialog.
1883
1884       2. Use a PASSPHRASE of your choice for symmetric encryption and a
1885       signing key whose passphrase is empty.
1886
1887 3. The used PASSPHRASE for symmetric encryption and the passphrase of
1888 the signing key are identical.
1889
1890
1891KNOWN ISSUES / BUGS
1892       Hard links are currently unsupported (they will be treated as
1893       non-linked regular files).
1894
1895       Bad signatures will be treated as empty instead of logging an
1896       appropriate error message.
1897
1898
1899OPERATION AND DATA FORMATS
1900       This section describes duplicity's basic operation and the format
1901       of its data files. It should not be necessary to read this section
1902       to use duplicity.
1903
1904 The files used by duplicity to store backup data are tarfiles in GNU
1905 tar format. They can be produced independently by rdiffdir(1). For
1906 incremental backups, new files are saved normally in the tarfile. But
1907 when a file changes, instead of storing a complete copy of the file,
1908 only a diff is stored, as generated by rdiff(1). If a file is deleted,
1909 a 0 length file is stored in the tar. It is possible to restore a
1910 duplicity archive "manually" by using tar and then cp, rdiff, and rm as
1911 necessary. These duplicity archives have the extension difftar.
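The zero-length-entry convention for deletions can be sketched with Python's tarfile module; the member name below is illustrative and not necessarily duplicity's exact naming scheme:

```python
import io
import tarfile

# Build a tiny in-memory tar with one zero-length member, the way a
# deletion is recorded in a difftar; the member name is illustrative.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="deleted/some_file")
    info.size = 0  # zero length marks the file as deleted
    tar.addfile(info)

# Read it back the way a manual restore with tar would see it.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmembers()[0]
print(member.name, member.size)
```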
1912
1913 Both full and incremental backup sets have the same format. In effect,
1914 a full backup set is an incremental one generated from an empty
1915 signature (see below). The files in full backup sets will start with
1916 duplicity-full while the incremental sets start with duplicity-inc.
1917 When restoring, duplicity applies patches in order, so deleting, for
1918 instance, a full backup set may make related incremental backup sets
1919 unusable.
1920
1921 In order to determine which files have been deleted, and to calculate
1922 diffs for changed files, duplicity needs to process information about
1923 previous sessions. It stores this information in the form of tarfiles
1924 where each entry's data contains the signature (as produced by rdiff)
1925 of the file instead of the file's contents. These signature sets have
1926 the extension sigtar.
1927
1928 Signature files are not required to restore a backup set, but without
1929 an up-to-date signature, duplicity cannot append an incremental backup
1930 to an existing archive.
1931
1932 To save bandwidth, duplicity generates full signature sets and
1933 incremental signature sets. A full signature set is generated for each
1934 full backup, and an incremental one for each incremental backup. These
1935 start with duplicity-full-signatures and duplicity-new-signatures
1936       respectively. These signatures will be stored both locally and
1937       remotely. The remote signatures will be encrypted if encryption is
1938       enabled. The local signatures are stored unencrypted in the
1939       archive dir (see --archive-dir ).
1940
1941
1942REQUIREMENTS
1943 Duplicity requires a POSIX-like operating system with a python
1944 interpreter version 2.6+ installed. It is best used under GNU/Linux.
1945
1946 Some backends also require additional components (probably available as
1947 packages for your specific platform):
1948
1949 Amazon Drive backend
1950 python-requests - http://python-requests.org
1951 python-requests-oauthlib - https://github.com/requests/requests-
1952 oauthlib
1953
1954 azure backend (Azure Blob Storage Service)
1955 Microsoft Azure Storage SDK for Python -
1956 https://pypi.python.org/pypi/azure-storage/
1957
1958 boto backend (S3 Amazon Web Services, Google Cloud Storage)
1959 boto version 2.0+ - http://github.com/boto/boto
1960
1961 cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
1962 Rackspace CloudFiles Pyrax API -
1963 http://docs.rackspace.com/sdks/guide/content/python.html
1964
1965 dpbx backend (Dropbox)
1966 Dropbox Python SDK -
1967 https://www.dropbox.com/developers/reference/sdk
1968
1969 gdocs gdata backend (legacy Google Docs backend)
1970 Google Data APIs Python Client Library -
1971 http://code.google.com/p/gdata-python-client/
1972
1973       gdocs pydrive backend (default)
1974 see pydrive backend
1975
1976 gio backend (Gnome VFS API)
1977 PyGObject - http://live.gnome.org/PyGObject
1978 D-Bus (dbus)- http://www.freedesktop.org/wiki/Software/dbus
1979
1980 lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
1981 sftp, webdav[s])
1982 LFTP Client - http://lftp.yar.ru/
1983
1984 MEGA backend (only works for accounts created prior to November 2018)
1985 (mega.nz)
1986 megatools client - https://github.com/megous/megatools
1987
1988 MEGA v2 backend (works for all MEGA accounts) (mega.nz)
1989 MEGAcmd client - https://mega.nz/cmd
1990
1991 multi backend
1992 Multi -- store to more than one backend
1993 (also see A NOTE ON MULTI BACKEND ) below.
1994
1995 ncftp backend (ftp, select via ncftp+ftp://)
1996 NcFTP - http://www.ncftp.com/
1997
1998 OneDrive backend (Microsoft OneDrive)
1999 python-requests-oauthlib - https://github.com/requests/requests-
2000 oauthlib
2001
2002 Par2 Wrapper Backend
2003 par2cmdline - http://parchive.sourceforge.net/
2004
2005 pydrive backend
2006 PyDrive -- a wrapper library of google-api-python-client -
2007 https://pypi.python.org/pypi/PyDrive
2008 (also see A NOTE ON PYDRIVE BACKEND ) below.
2009
2010 rclone backend
2011 rclone - https://rclone.org/
2012
2013 rsync backend
2014 rsync client binary - http://rsync.samba.org/
2015
2016 ssh paramiko backend (default)
2017 paramiko (SSH2 for python) -
2018 http://pypi.python.org/pypi/paramiko (downloads);
2019 http://github.com/paramiko/paramiko (project page)
2020 pycrypto (Python Cryptography Toolkit) -
2021 http://www.dlitz.net/software/pycrypto/
2022
2023 ssh pexpect backend
2024 sftp/scp client binaries OpenSSH - http://www.openssh.com/
2025 Python pexpect module -
2026 http://pexpect.sourceforge.net/pexpect.html
2027
2028 swift backend (OpenStack Object Storage)
2029 Python swiftclient module - https://github.com/openstack/python-
2030 swiftclient/
2031 Python keystoneclient module -
2032 https://github.com/openstack/python-keystoneclient/
2033
2034 webdav backend
2035 certificate authority database file for ssl certificate
2036 verification of HTTPS connections -
2037 http://curl.haxx.se/docs/caextract.html
2038 (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
2039 Python kerberos module for kerberos authentication -
2040 https://github.com/02strich/pykerberos
2041
2042 MediaFire backend
2043 MediaFire Python Open SDK -
2044 https://pypi.python.org/pypi/mediafire/
2045
2046
2047AUTHOR
2048 Original Author - Ben Escoto <bescoto@stanford.edu>
2049
2050 Current Maintainer - Kenneth Loafman <kenneth@loafman.com>
2051
2052 Continuous Contributors
2053 Edgar Soldin, Mike Terry
2054
2055       Most backends were contributed individually. Information about
2056       their authorship may be found in the corresponding file's header.
2057
2058       We'd also like to thank everybody posting issues to the mailing
2059       list or on launchpad, sending in patches or contributing otherwise.
2060       Duplicity wouldn't be as stable and useful if it weren't for you.
2061
2062 A special thanks goes to rsync.net, a Cloud Storage provider with
2063 explicit support for duplicity, for several monetary donations and for
2064 providing a special "duplicity friends" rate for their offsite backup
2065 service. Email info@rsync.net for details.
2066
2067
2068SEE ALSO
2069 rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).
2070
2071
2072
2073Version 0.8.18 January 09, 2021 DUPLICITY(1)