DUPLICITY(1)                     User Manuals                    DUPLICITY(1)


NAME
       duplicity - Encrypted incremental backup to local or remote storage.

SYNOPSIS
       For detailed descriptions for each command see chapter ACTIONS.

       duplicity [full|incremental] [options] source_directory target_url

       duplicity verify [options] [--compare-data] [--file-to-restore
       <relpath>] [--time time] source_url target_directory

       duplicity collection-status [options] [--file-changed <relpath>]
       target_url

       duplicity list-current-files [options] [--time time] target_url

       duplicity [restore] [options] [--file-to-restore <relpath>] [--time
       time] source_url target_directory

       duplicity remove-older-than <time> [options] [--force] target_url

       duplicity remove-all-but-n-full <count> [options] [--force] target_url

       duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
       target_url

       duplicity cleanup [options] [--force] target_url

       duplicity replicate [options] [--time time] source_url target_url

DESCRIPTION
       Duplicity incrementally backs up files and folders into tar-format
       volumes encrypted with GnuPG and places them on a remote (or local)
       storage backend.  See chapter URL FORMAT for a list of all supported
       backends and how to address them.  Because duplicity uses librsync,
       incremental backups are space efficient and only record the parts of
       files that have changed since the last backup.  Currently duplicity
       supports deleted files, full Unix permissions, uid/gid, directories,
       symbolic links, fifos, etc., but not hard links.

       If you are backing up the root directory /, remember to --exclude
       /proc, or else duplicity will probably crash on the weird stuff in
       there.

EXAMPLES
       Here is an example of a backup, using sftp to back up /home/me to
       some_dir on the other.host machine:

              duplicity /home/me sftp://uid@other.host/some_dir

       If the above is run repeatedly, the first will be a full backup, and
       subsequent ones will be incremental.  To force a full backup, use the
       full action:

              duplicity full /home/me sftp://uid@other.host/some_dir

       or force a full backup once a given time has elapsed since the last
       one via --full-if-older-than <time>, e.g. a full every month:

              duplicity --full-if-older-than 1M /home/me
              sftp://uid@other.host/some_dir

       Now suppose we accidentally delete /home/me and want to restore it the
       way it was at the time of the last backup:

              duplicity sftp://uid@other.host/some_dir /home/me

       Duplicity enters restore mode because the URL comes before the local
       directory.  If we wanted to restore just the file "Mail/article" in
       /home/me as it was three days ago into /home/me/restored_file:

              duplicity -t 3D --file-to-restore Mail/article
              sftp://uid@other.host/some_dir /home/me/restored_file

       The following command compares the latest backup with the current
       files:

              duplicity verify sftp://uid@other.host/some_dir /home/me

       Finally, duplicity recognizes several include/exclude options.  For
       instance, the following will back up the root directory, but exclude
       /mnt, /tmp, and /proc:

              duplicity --exclude /mnt --exclude /tmp --exclude /proc /
              file:///usr/local/backup

       Note that in this case the destination is the local directory
       /usr/local/backup.  The following will back up only the /home and /etc
       directories under root:

              duplicity --include /home --include /etc --exclude '**' /
              file:///usr/local/backup

       Duplicity can also access a repository via ftp.  If a user name is
       given, the environment variable FTP_PASSWORD is read to determine the
       password:

              FTP_PASSWORD=mypassword duplicity /local/dir
              ftp://user@other.host/some_dir

ACTIONS
       Duplicity knows action commands, which can be fine-tuned with options.
       The actions for backup (full, incr) and restoration (restore) can also
       be left out, as duplicity detects which mode to switch to from the
       order of the target URL and the local folder: if the target URL comes
       before the local folder, a restore is in order; if the local folder
       comes before the target URL, then that folder is about to be backed up
       to the target URL.
       If a backup is in order and old signatures can be found, duplicity
       automatically performs an incremental backup.

       Note: The following descriptions cover some but not all options that
       can be used in connection with each action command.  Consult the
       OPTIONS section for more detailed information.


       full <folder> <url>
              Perform a full backup.  A new backup chain is started even if
              signatures are available for an incremental backup.


       incr <folder> <url>
              If this is requested, an incremental backup will be performed.
              Duplicity will abort if no old signatures can be found.


       verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
       <url> <local_path>
              Verify tests the integrity of the backup archives at the remote
              location by downloading each file and checking both that it can
              restore the archive and that the restored file matches the
              signature of that file stored in the backup, i.e. compares the
              archived file with its hash value from archival time.  Verify
              does not actually restore and will not overwrite any local
              files.  Duplicity will exit with a non-zero error level if any
              files do not match the signature stored in the archive for that
              file.  On verbosity level 4 or higher, it will log a message
              for each file that differs from the stored signature.  Files
              must be downloaded to the local machine in order to compare
              them.  Verify does not compare the backed-up version of the
              file to the current local copy of the files unless the
              --compare-data option is used (see below).
              The --file-to-restore option restricts verify to that file or
              folder.  The --time option allows you to select a backup to
              verify.  The --compare-data option enables data comparison (see
              below).


       collection-status [--file-changed <relpath>] <url>
              Summarize the status of the backup repository by printing the
              chains and sets found, and the number of volumes in each.
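
              For example, to inspect the repository used in the EXAMPLES
              chapter (the host and path are placeholders):

              duplicity collection-status sftp://uid@other.host/some_dir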


       list-current-files [--time <time>] <url>
              Lists the files contained in the most current backup or backup
              at time.  The information will be extracted from the signature
              files, not the archive data itself.  Thus the whole archive
              does not have to be downloaded, but on the other hand if the
              archive has been deleted or corrupted, this command will not
              detect it.
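
              For example, to list the files as they were three days ago
              (placeholder URL as above):

              duplicity list-current-files --time 3D
              sftp://uid@other.host/some_dir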


       restore [--file-to-restore <relpath>] [--time <time>] <url>
       <target_folder>
              You can restore the full monty or selected folders/files from a
              specific time.  Use the relative path as it is printed by
              list-current-files.  Usually not needed as duplicity enters
              restore mode when it detects that the URL comes before the
              local folder.


       remove-older-than <time> [--force] <url>
              Delete all backup sets older than the given time.  Old backup
              sets will not be deleted if backup sets newer than time depend
              on them.  See the TIME FORMATS section for more information.
              Note, this action cannot be combined with backup or other
              actions, such as cleanup.  Note also that --force will be
              needed to delete the files instead of just listing them.
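
              For example, to actually delete all sets older than six months
              (placeholder URL; without --force they would only be listed):

              duplicity remove-older-than 6M --force
              sftp://uid@other.host/some_dir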


       remove-all-but-n-full <count> [--force] <url>
              Delete all backup sets that are older than the count:th last
              full backup (in other words, keep the last count full backups
              and associated incremental sets).  count must be larger than
              zero.  A value of 1 means that only the single most recent
              backup chain will be kept.  Note that --force will be needed
              to delete the files instead of just listing them.
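
              For example, to keep the last three full backups plus their
              increments and delete everything older (placeholder URL):

              duplicity remove-all-but-n-full 3 --force
              sftp://uid@other.host/some_dir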


       remove-all-inc-of-but-n-full <count> [--force] <url>
              Delete incremental sets of all backup sets that are older than
              the count:th last full backup (in other words, keep only old
              full backups and not their increments).  count must be larger
              than zero.  A value of 1 means that only the single most recent
              backup chain will be kept intact.  Note that --force will be
              needed to delete the files instead of just listing them.


       cleanup [--force] <url>
              Delete the extraneous duplicity files on the given backend.
              Non-duplicity files, or files in complete data sets will not be
              deleted.  This should only be necessary after a duplicity
              session fails or is aborted prematurely.  Note that --force
              will be needed to delete the files instead of just listing
              them.


       replicate [--time time] <source_url> <target_url>
              Replicate backup sets from source to target backend.  Files
              will be (re)-encrypted and (re)-compressed depending on normal
              backend options.  Signatures and volumes will not get
              recomputed, thus options like --volsize or --max-blocksize have
              no effect.  When --time time is given, only backup sets older
              than time will be replicated.
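
              For example, to replicate sets older than one month from a
              local repository to a remote one (both URLs are placeholders):

              duplicity replicate --time 1M file:///usr/local/backup
              sftp://uid@other.host/some_dir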


OPTIONS
       --allow-source-mismatch
              Do not abort on attempts to use the same archive dir or remote
              backend to back up different directories.  duplicity will tell
              you if you need this switch.


       --archive-dir path
              The archive directory.  NOTE: This option changed in 0.6.0.
              The archive directory is now necessary in order to manage
              persistence for current and future enhancements.  As such, this
              option is now used only to change the location of the archive
              directory.  The archive directory should not be deleted, or
              duplicity will have to recreate it from the remote repository
              (which may require decrypting the backup contents).

              When backing up or restoring, this option specifies that the
              local archive directory is to be created in path.  If the
              archive directory is not specified, the default will be to
              create the archive directory in ~/.cache/duplicity/.

              The archive directory can be shared between backups to multiple
              targets, because a subdirectory of the archive dir is used for
              individual backups (see --name ).

              The combination of archive directory and backup name must be
              unique in order to separate the data of different backups.

              The interaction between the --archive-dir and the --name
              options allows for four possible combinations for the location
              of the archive dir:


              1. neither specified (default)
                     ~/.cache/duplicity/hash-of-url

              2. --archive-dir=/arch, no --name
                     /arch/hash-of-url

              3. no --archive-dir, --name=foo
                     ~/.cache/duplicity/foo

              4. --archive-dir=/arch, --name=foo
                     /arch/foo
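
              For example, combination 4 above would result from a command
              line like the following (paths and URL are placeholders):

              duplicity --archive-dir /arch --name foo /home/me
              sftp://uid@other.host/some_dir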


       --asynchronous-upload
              (EXPERIMENTAL) Perform file uploads asynchronously in the
              background, with respect to volume creation.  This means that
              duplicity can upload a volume while, at the same time,
              preparing the next volume for upload.  The intended end result
              is a faster backup, because the local CPU and your bandwidth
              can be more consistently utilized.  Use of this option implies
              additional need for disk space in the temporary storage
              location; rather than needing to store only one volume at a
              time, enough storage space is required to store two volumes.


       --backend-retry-delay number
              Specifies the number of seconds that duplicity waits after an
              error has occurred before attempting to repeat the operation.


       --cf-backend backend
              Allows the explicit selection of a cloudfiles backend.
              Defaults to pyrax.  Alternatively you might choose cloudfiles.


       --compare-data
              Enable data comparison of regular files on action verify.  This
              conducts a verify as described above to verify the integrity of
              the backup archives, but additionally compares restored files
              to those in target_directory.  Duplicity will not replace any
              files in target_directory.  Duplicity will exit with a non-zero
              error level if the files do not correctly verify or if any
              files from the archive differ from those in target_directory.
              On verbosity level 4 or higher, it will log a message for each
              file that differs from its equivalent in target_directory.


       --copy-links
              Resolve symlinks during backup.  Enabling this will resolve &
              back up the symlink's file/folder data instead of the symlink
              itself, potentially increasing the size of the backup.


       --dry-run
              Calculate what would be done, but do not perform any backend
              actions.


       --encrypt-key key-id
              When backing up, encrypt to the given public key, instead of
              using symmetric (traditional) encryption.  Can be specified
              multiple times.  The key-id can be given in any of the formats
              supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
              ID" for details.
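
              A minimal sketch encrypting to two recipients; the key ids and
              URL are placeholders:

              duplicity --encrypt-key A1B2C3D4 --encrypt-key E5F6A7B8
              /home/me sftp://uid@other.host/some_dir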


       --encrypt-secret-keyring filename
              This option can only be used with --encrypt-key, and changes
              the path to the secret keyring for the encrypt key to filename.
              This keyring is not used when creating a backup.  If not
              specified, the default secret keyring is used, which is usually
              located at .gnupg/secring.gpg


       --encrypt-sign-key key-id
              Convenience parameter.  Same as --encrypt-key key-id --sign-key
              key-id.


       --exclude shell_pattern
              Exclude the file or files matched by shell_pattern.  If a
              directory is matched, then files under that directory will also
              be matched.  See the FILE SELECTION section for more
              information.


       --exclude-device-files
              Exclude all device files.  This can be useful for
              security/permissions reasons or if duplicity is not handling
              device files correctly.


       --exclude-filelist filename
              Excludes the files listed in filename, with each line of the
              filelist interpreted according to the same rules as --include
              and --exclude.  See the FILE SELECTION section for more
              information.


       --exclude-if-present filename
              Exclude directories if filename is present.  Allows the user to
              specify folders that they do not wish to back up by adding a
              specified file (e.g. ".nobackup") instead of maintaining a
              comprehensive exclude/include list.
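
              For example, to skip every directory containing a marker file
              named ".nobackup" (the marker name is just a convention):

              duplicity --exclude-if-present .nobackup /home/me
              sftp://uid@other.host/some_dir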


       --exclude-older-than time
              Exclude any files whose modification date is earlier than the
              specified time.  This can be used to produce a partial backup
              that contains only recently changed files.  See the TIME
              FORMATS section for more information.


       --exclude-other-filesystems
              Exclude files on file systems (identified by device number)
              other than the file system the root of the source directory is
              on.


       --exclude-regexp regexp
              Exclude files matching the given regexp.  Unlike the --exclude
              option, this option does not match files in a directory it
              matches.  See the FILE SELECTION section for more information.


       --file-prefix, --file-prefix-manifest, --file-prefix-archive,
       --file-prefix-signature
              Adds a prefix to all files, manifest files, archive files,
              and/or signature files.

              The same set of prefixes must be passed in on backup and
              restore.

              If both global and type-specific prefixes are set, the global
              prefix will go before the type-specific prefixes.

              See also A NOTE ON FILENAME PREFIXES


       --file-to-restore path
              This option may be given in restore mode, causing only path to
              be restored instead of the entire contents of the backup
              archive.  path should be given relative to the root of the
              directory backed up.


       --full-if-older-than time
              Perform a full backup if an incremental backup is requested,
              but the latest full backup in the collection is older than the
              given time.  See the TIME FORMATS section for more information.


       --force
              Proceed even if data loss might result.  Duplicity will let the
              user know when this option is required.


       --ftp-passive
              Use passive (PASV) data connections.  The default is to use
              passive, but to fall back to regular if the passive connection
              fails or times out.


       --ftp-regular
              Use regular (PORT) data connections.


       --gio  Use the GIO backend and interpret any URLs as GIO would.


       --hidden-encrypt-key key-id
              Same as --encrypt-key, but it hides the user's key id from the
              encrypted file.  It uses gpg's --hidden-recipient option to
              obfuscate the owner of the backup.  On restore, gpg will
              automatically try all available secret keys in order to decrypt
              the backup.  See gpg(1) for more details.


       --ignore-errors
              Try to ignore certain errors if they happen.  This option is
              only intended to allow the restoration of a backup in the face
              of certain problems that would otherwise cause the backup to
              fail.  It is never recommended to use this option unless you
              have a situation where you are trying to restore from backup
              and it is failing because of an issue which you want duplicity
              to ignore.  Even then, depending on the issue, this option may
              not have an effect.

              Please note that while ignored errors will be logged, there
              will be no summary at the end of the operation to tell you what
              was ignored, if anything.  If this is used for emergency
              restoration of data, it is recommended that you run the backup
              in such a way that you can revisit the backup log (look for
              lines containing the string IGNORED_ERROR).

              If you ever have to use this option for reasons that are not
              understood or understood but not your own responsibility,
              please contact the duplicity maintainers.  The need to use this
              option under production circumstances would normally be
              considered a bug.


       --imap-full-address email_address
              The full email address of the user name when logging into an
              imap server.  If not supplied, just the user name part of the
              email address is used.


       --imap-mailbox option
              Allows you to specify a different mailbox.  The default is
              "INBOX".  Other languages may require a different mailbox than
              the default.


       --gpg-binary file_path
              Allows you to force duplicity to use file_path as the gpg
              command line binary.  Can be an absolute or relative file path
              or a file name.  Default value is 'gpg'.  The binary will be
              located via the PATH environment variable.


       --gpg-options options
              Allows you to pass options to gpg encryption.  The options list
              should be of the form "--opt1 --opt2=parm" where the string is
              quoted and the only spaces allowed are between options.
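
              A minimal sketch of the quoting described above; the particular
              gpg option is only an illustration (see gpg(1) for valid
              options):

              duplicity --gpg-options "--personal-cipher-preferences=AES256"
              /home/me sftp://uid@other.host/some_dir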


       --include shell_pattern
              Similar to --exclude but include matched files instead.  Unlike
              --exclude, this option will also match parent directories of
              matched files (although not necessarily their contents).  See
              the FILE SELECTION section for more information.


       --include-filelist filename
              Like --exclude-filelist, but include the listed files instead.
              See the FILE SELECTION section for more information.


       --include-regexp regexp
              Include files matching the regular expression regexp.  Only
              files explicitly matched by regexp will be included by this
              option.  See the FILE SELECTION section for more information.


       --log-fd number
              Write specially-formatted versions of output messages to the
              specified file descriptor.  The format used is designed to be
              easily consumable by other programs.


       --log-file filename
              Write specially-formatted versions of output messages to the
              specified file.  The format used is designed to be easily
              consumable by other programs.


       --max-blocksize number
              Determines the size of the blocks examined for changes during
              the diff process.  For files < 1MB the blocksize is a constant
              of 512.  For files over 1MB the size is given by:

                     file_blocksize = int((file_len / (2000 * 512)) * 512)
                     return min(file_blocksize, globals.max_blocksize)

              where globals.max_blocksize defaults to 2048.  If you specify a
              larger max_blocksize, your difftar files will be larger, but
              your sigtar files will be smaller.  If you specify a smaller
              max_blocksize, the reverse occurs.  The --max-blocksize option
              should be in multiples of 512.


       --name symbolicname
              Set the symbolic name of the backup being operated on.  The
              intent is to use a separate name for each logically distinct
              backup.  For example, someone may use "home_daily_s3" for the
              daily backup of a home directory to Amazon S3.  The structure
              of the name is up to the user, it is only important that the
              names be distinct.  The symbolic name is currently only used to
              affect the expansion of --archive-dir , but may be used for
              additional features in the future.  Users running more than one
              distinct backup are encouraged to use this option.

              If not specified, the default value is a hash of the backend
              URL.


       --no-compression
              Do not use GZip to compress files on remote system.


       --no-encryption
              Do not use GnuPG to encrypt files on remote system.


       --no-print-statistics
              By default duplicity will print statistics about the current
              session after a successful backup.  This switch disables that
              behavior.


       --null-separator
              Use nulls (\0) instead of newlines (\n) as line separators,
              which may help when dealing with filenames containing newlines.
              This affects the expected format of the files specified by the
              --{include|exclude}-filelist switches as well as the format of
              the directory statistics file.
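
              A minimal sketch pairing this with a null-separated filelist,
              e.g. one produced by find(1); the paths are placeholders:

              find /home/me -name '*.tmp' -print0 > /tmp/excludes
              duplicity --null-separator --exclude-filelist /tmp/excludes
              /home/me sftp://uid@other.host/some_dir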


       --numeric-owner
              On restore always use the numeric uid/gid from the archive
              instead of the archived user/group names, which is the default
              behaviour.  Recommended for restoring from live CDs which might
              have users with identical names but different uids/gids.


       --num-retries number
              Number of retries to make on errors before giving up.


       --old-filenames
              Use the old filename format (incompatible with Windows/Samba)
              rather than the new filename format.


       --par2-options options
              Verbatim options to pass to par2.


       --par2-redundancy percent
              Adjust the level of redundancy in percent for Par2 recovery
              files (default 10%).


       --progress
              When selected, duplicity will output the current upload
              progress and estimated upload time.  To annotate changes, it
              will perform a first dry-run before a full or incremental
              backup, and then run the real operation, estimating the real
              upload progress.


       --progress-rate number
              Sets the update rate at which duplicity will output the upload
              progress messages (requires --progress option).  Default is to
              print the status every 3 seconds.


       --rename <original path> <new path>
              Treats the path original path in the backup as if it were the
              path new path.  Can be passed multiple times.  An example:

              duplicity restore --rename Documents/metal Music/metal
              sftp://uid@other.host/some_dir /home/me


       --rsync-options options
              Allows you to pass options to the rsync backend.  The options
              list should be of the form "opt1=parm1 opt2=parm2" where the
              option string is quoted and the only spaces allowed are between
              options.  The option string will be passed verbatim to rsync,
              after any internally generated option designating the remote
              port to use.  Here is a possibly useful example:

              duplicity --rsync-options="--partial-dir=.rsync-partial"
              /home/me rsync://uid@other.host/some_dir


       --s3-european-buckets
              When using the Amazon S3 backend, create buckets in Europe
              instead of the default (requires --s3-use-new-style ).  Also
              see the EUROPEAN S3 BUCKETS section.

              This option does not apply when using the newer boto3 backend,
              which does not create buckets.

              See also A NOTE ON AMAZON S3 below.


       --s3-unencrypted-connection
              Don't use SSL for connections to S3.

              This may be much faster, at some cost to confidentiality.

              With this option, anyone who can observe traffic between your
              computer and S3 will be able to tell: that you are using
              Duplicity, the name of the bucket, your AWS Access Key ID, the
              increment dates and the amount of data in each increment.

              This option affects only the connection, not the GPG encryption
              of the backup increment files.  Unless that is disabled, an
              observer will not be able to see the file names or contents.

              This option is not available when using the newer boto3
              backend.

              See also A NOTE ON AMAZON S3 below.


       --s3-use-new-style
              When operating on Amazon S3 buckets, use new-style subdomain
              bucket addressing.  This is now the preferred method to access
              Amazon S3, but is not backwards compatible if your bucket name
              contains upper-case characters or other characters that are not
              valid in a hostname.

              This option has no effect when using the newer boto3 backend,
              which will always use new-style subdomain bucket naming.

              See also A NOTE ON AMAZON S3 below.


       --s3-use-rrs
              Store volumes using Reduced Redundancy Storage when uploading
              to Amazon S3.  This will lower the cost of storage but also
              lower the durability of stored volumes to 99.99% instead of the
              99.999999999% durability offered by Standard Storage on S3.


       --s3-use-ia
              Store volumes using Standard - Infrequent Access when uploading
              to Amazon S3.  This storage class has a lower storage cost but
              a higher per-request cost, and the storage cost is calculated
              against a 30-day storage minimum.  According to Amazon, this
              storage is ideal for long-term file storage, backups, and
              disaster recovery.


       --s3-use-onezone-ia
              Store volumes using One Zone - Infrequent Access when uploading
              to Amazon S3.  This storage is similar to Standard - Infrequent
              Access, but only stores object data in one Availability Zone.


       --s3-use-glacier
              Store volumes using Glacier S3 when uploading to Amazon S3.
              This storage class has a lower cost of storage but a higher
              per-request cost along with delays of up to 12 hours from the
              time of retrieval request.  This storage cost is calculated
              against a 90-day storage minimum.  According to Amazon this
              storage is ideal for data archiving and long-term backup
              offering 99.999999999% durability.  To restore a backup you
              will have to manually migrate all data stored on AWS Glacier
              back to Standard S3 and wait for AWS to complete the migration.
              Notice: Duplicity will store the manifest.gpg files from full
              and incremental backups on AWS S3 standard storage to allow
              quick retrieval for later incremental backups; all other data
              is stored in S3 Glacier.


       --s3-use-deep-archive
              Store volumes using Glacier Deep Archive S3 when uploading to
              Amazon S3.  This storage class has a lower cost of storage but
              a higher per-request cost along with delays of up to 48 hours
              from the time of retrieval request.  This storage cost is
              calculated against a 180-day storage minimum.  According to
              Amazon this storage is ideal for data archiving and long-term
              backup offering 99.999999999% durability.  To restore a backup
              you will have to manually migrate all data stored on AWS
              Glacier Deep Archive back to Standard S3 and wait for AWS to
              complete the migration.  Notice: Duplicity will store the
              manifest.gpg files from full and incremental backups on AWS S3
              standard storage to allow quick retrieval for later incremental
              backups; all other data is stored in S3 Glacier Deep Archive.

              Glacier Deep Archive is only available when using the newer
              boto3 backend.


       --s3-use-multiprocessing
              Allow multipart volume uploads to S3 through multiprocessing.
              This option requires Python 2.6 and can be used to make uploads
              to S3 more efficient.  If enabled, files duplicity uploads to
              S3 will be split into chunks and uploaded in parallel.  Useful
              if you want to saturate your bandwidth or if large files are
              failing during upload.

              This has no effect when using the newer boto3 backend.  Boto3
              always attempts to use multiprocessing when it is believed to
              be more efficient.

              See also A NOTE ON AMAZON S3 below.


       --s3-use-server-side-encryption
              Allow use of server-side encryption in S3.


       --s3-multipart-chunk-size
              Chunk size (in MB) used for S3 multipart uploads.  Make this
              smaller than --volsize to maximize the use of your bandwidth.
              For example, a chunk size of 10MB with a volsize of 30MB will
              result in 3 chunks per volume upload.

              This has no effect when using the newer boto3 backend.

              See also A NOTE ON AMAZON S3 below.


       --s3-multipart-max-procs
              Specify the maximum number of processes to spawn when
              performing a multipart upload to S3.  By default, this will
              choose the number of processors detected on your system (e.g. 4
              for a 4-core system).  You can adjust this number as required
              to ensure you don't overload your system while maximizing the
              use of your bandwidth.

              This has no effect when using the newer boto3 backend.

              See also A NOTE ON AMAZON S3 below.


       --s3-multipart-max-timeout
              You can control the maximum time (in seconds) a multipart
              upload can spend on uploading a single chunk to S3.  This may
              be useful if you find your system hanging on multipart uploads
              or if you'd like to control the time variance when uploading to
              S3 to ensure you kill connections to slow S3 endpoints.

              This has no effect when using the newer boto3 backend.

              See also A NOTE ON AMAZON S3 below.


       --azure-blob-tier
              Standard storage tier used for backup files (Hot|Cool|Archive).


       --azure-max-single-put-size
              Specify the largest supported upload size for which the Azure
              library makes only one put call.  If the content size is known
              and below this value, the Azure library will perform only one
              put request to upload one block.  The number is expected to be
              in bytes.


       --azure-max-block-size
              Specify the block size used by the Azure library to upload
              blobs if a blob is split into multiple blocks.  The maximum
              block size the service supports is 104857600 (100MiB) and the
              default is 4194304 (4MiB).


       --azure-max-connections
              Specify the maximum number of connections used to transfer one
              blob to Azure when the blob size exceeds 64MB.  The default
              value is 2.


       --scp-command command
              (only ssh pexpect backend with --use-scp enabled) The command
              will be used instead of "scp" to send or receive files.  To
              list and delete existing files, the sftp command is used.
              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.


       --sftp-command command
              (only ssh pexpect backend) The command will be used instead of
              "sftp".
              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.


       --short-filenames
              If this option is specified, the names of the files duplicity
              writes will be shorter (about 30 chars) but less
              understandable.  This may be useful when backing up to MacOS or
              another OS or FS that doesn't support long filenames.


       --sign-key key-id
              This option can be used when backing up, restoring or
              verifying.  When backing up, all backup files will be signed
              with the given key-id.  When restoring, duplicity will signal
              an error if any remote file is not signed with the given
              key-id.  The key-id can be given in any of the formats
              supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
              ID" for details.  Should be specified only once because
              currently only one signing key is supported.  The last entry
              overrides all other entries.
              See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING


       --ssh-askpass
              Tells the ssh backend to prompt the user for the remote system
              password, if it was not defined in the target url and no
              FTP_PASSWORD env var is set.  This password is also used for
              passphrase-protected ssh keys.


       --ssh-options options
              Allows you to pass options to the ssh backend.  Can be
              specified multiple times or as a space separated options list.
              The options list should be of the form "-oOpt1='parm1'
              -oOpt2='parm2'" where the option string is quoted and the only
              spaces allowed are between options.  The option string will be
              passed verbatim to both scp and sftp, whose command-line syntax
              differs slightly; the options should therefore be given in the
              long option format described in ssh_config(5).

              example of a list:

              duplicity --ssh-options="-oProtocol=2
              -oIdentityFile='/my/backup/id'" /home/me
              scp://user@host/some_dir

              example with multiple parameters:

              duplicity --ssh-options="-oProtocol=2"
              --ssh-options="-oIdentityFile='/my/backup/id'" /home/me
              scp://user@host/some_dir

              NOTE: The ssh paramiko backend currently supports only the -i
              or -oIdentityFile setting.  If needed, provide more host
              specific options via the ssh_config file.


       --ssl-cacert-file file
              (only webdav & lftp backend) Provide a cacert file for ssl
              certificate verification.
              See also A NOTE ON SSL CERTIFICATE VERIFICATION.


       --ssl-cacert-path path/to/certs/
              (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
              recent lftp) Provide a path to a folder containing cacert files
              for ssl certificate verification.
              See also A NOTE ON SSL CERTIFICATE VERIFICATION.


       --ssl-no-check-certificate
              (only webdav & lftp backend) Disable ssl certificate
              verification.
              See also A NOTE ON SSL CERTIFICATE VERIFICATION.


       --swift-storage-policy
              Use this storage policy when operating on Swift containers.
              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.


       --metadata-sync-mode mode
              This option defaults to 'partial', but you can set it to
              'full'.
              Use 'partial' to avoid syncing metadata for backup chains that
              you are not going to use.  This saves time when restoring for
              the first time, and lets you restore an old backup that was
              encrypted with a different passphrase by supplying only the
              target passphrase.
              Use 'full' to sync metadata for all backup chains on the
              remote.


       --tempdir directory
              Use this existing directory for duplicity temporary files
              instead of the system default, which is usually the /tmp
              directory.  This option supersedes any environment variable.
              See also ENVIRONMENT VARIABLES.


       -ttime, --time time, --restore-time time
              Specify the time from which to restore or list files.


       --time-separator char
              Use char as the time separator in filenames instead of colon
              (":").


       --timeout seconds
              Use seconds as the socket timeout value if duplicity begins to
              time out during network operations.  The default is 30 seconds.


       --use-agent
              If this option is specified, then --use-agent is passed to the
              GnuPG encryption process and it will try to connect to
              gpg-agent before it asks for a passphrase for --encrypt-key or
              --sign-key if needed.
              Note: Contrary to previous versions of duplicity, this option
              will also be honored by GnuPG 2 and newer versions.  If GnuPG 2
              is in use, duplicity passes the option
              --pinentry-mode=loopback to the gpg process unless --use-agent
              is specified on the duplicity command line.  This has the
              effect that GnuPG 2 uses the agent only if --use-agent is
              given, just like GnuPG 1.


       --verbosity level, -vlevel
              Specify output verbosity level (log level).  Named levels and
              corresponding values are 0 Error, 2 Warning, 4 Notice
              (default), 8 Info, 9 Debug (noisiest).
              level may also be
              a character: e, w, n, i, d
              a word: error, warning, notice, info, debug

              The options -v4, -vn and -vnotice are functionally equivalent,
              as are the mixed/upper-case versions -vN, -vNotice and
              -vNOTICE.


       --version
              Print duplicity's version and quit.


       --volsize number
              Change the volume size to number MB.  Default is 200MB.


ENVIRONMENT VARIABLES
       TMPDIR, TEMP, TMP
              In decreasing order of importance, specifies the directory to
              use for temporary files (inherited from Python's tempfile
              module).  The option --tempdir supersedes any of these.

       FTP_PASSWORD
              Supported by most backends which are password capable.  More
              secure than setting it in the backend url (which might be
              readable in the operating system's process listing to other
              users on the same machine).

       PASSPHRASE
              This passphrase is passed to GnuPG.  If this is not set, the
              user will be prompted for the passphrase.
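
              For example, a non-interactive backup run (the passphrase and
              URL are placeholders):

              PASSPHRASE=my_secret duplicity /home/me
              sftp://uid@other.host/some_dir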

       SIGN_PASSPHRASE
              The passphrase to be used for --sign-key.  If omitted, and the
              signing key is also one of the keys to encrypt against,
              PASSPHRASE will be reused instead.  Otherwise, if a passphrase
              is needed but not set, the user will be prompted for it.


URL FORMAT
       Duplicity uses the URL format (as standard as possible) to define data
       locations.  The generic format for a URL is:

              scheme://[user[:password]@]host[:port]/[/]path

       It is permitted, but not recommended, to expose the password on the
       command line, since it could be revealed to anyone with permission to
       do process listings.  Consider setting the environment variable
       FTP_PASSWORD instead, which is used by most, if not all, backends,
       regardless of its name.

       In protocols that support it, the path may be preceded by a single
       slash, '/path', to represent a relative path to the target home
       directory, or preceded by a double slash, '//path', to represent an
       absolute filesystem path.
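
       For example, both of the following are valid sftp URLs; the first
       path is relative to the login's home directory, the second is
       absolute (host and paths are placeholders):

              sftp://uid@other.host/some_dir
              sftp://uid@other.host//var/backup/some_dir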

       Note:
              Scheme (protocol) access may be provided by more than one
              backend.  In case the default backend is buggy or simply not
              working in a specific case, it might be worth trying an
              alternative implementation.  Alternative backends can be
              selected by prefixing the scheme with the name of the
              alternative backend, e.g. ncftp+ftp://, and are mentioned
              below the scheme's syntax summary.


       Formats of each of the URL schemes follow:


       Amazon Drive Backend

              ad://some_dir

              See also A NOTE ON AMAZON DRIVE

       Azure

              azure://container-name

              See also A NOTE ON AZURE ACCESS

       B2

              b2://account_id[:application_key]@bucket_name/[folder/]

       Cloud Files (Rackspace)

              cf+http://container_name

              See also A NOTE ON CLOUD FILES ACCESS

       Dropbox

              dpbx:///some_dir

              Make sure to read A NOTE ON DROPBOX ACCESS first!

       Local file path

              file://[relative|/absolute]/local/path

       FISH (Files transferred over Shell protocol) over ssh

              fish://user[:password]@other.host[:port]/[relative|/absolute]_path

       FTP

              ftp[s]://user[:password]@other.host[:port]/some_dir

              NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
              default is lftp+ftp://...

       Google Docs

              gdocs://user[:password]@other.host/some_dir

              NOTE: use pydrive+, gdata+ prefixes to enforce a specific
              backend, default is pydrive+gdocs://...

       Google Cloud Storage

              gs://bucket[/prefix]

       HSI

              hsi://user[:password]@other.host/some_dir

       hubiC

              cf+hubic://container_name

              See also A NOTE ON HUBIC

       IMAP email storage

              imap[s]://user[:password]@host.com[/from_address_prefix]

              See also A NOTE ON IMAP

       Mega cloud storage

              mega://user[:password]@mega.co.nz/some_dir

       OneDrive Backend

              onedrive://some_dir

       Par2 Wrapper Backend

              par2+scheme://[user[:password]@]host[:port]/[/]path

              See also A NOTE ON PAR2 WRAPPER BACKEND

       Rclone Backend

              rclone://remote:/some_dir

              See also A NOTE ON RCLONE BACKEND

       Rsync via daemon

              rsync://user[:password]@host.com[:port]::[/]module/some_dir

       Rsync over ssh (only key auth)

              rsync://user@host.com[:port]/[relative|/absolute]_path

       S3 storage (Amazon)

              s3://host[:port]/bucket_name[/prefix]
              s3+http://bucket_name[/prefix]
              defaults to the legacy boto backend based on boto v2 (last
              update 2018/07)
              alternatively try the newer boto3+s3://bucket_name[/prefix]

              For details see A NOTE ON AMAZON S3 and see also A NOTE ON
              EUROPEAN S3 BUCKETS below.

       SCP/SFTP access

              scp://.. or
              sftp://user[:password]@other.host[:port]/[relative|/absolute]_path

              defaults are paramiko+scp:// and paramiko+sftp://
              alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
              See also --ssh-askpass, --ssh-options and A NOTE ON SSH
              BACKENDS.

       Swift (Openstack)

              swift://container_name[/prefix]

              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS

       Public Cloud Archive (OVH)

              pca://container_name[/prefix]

              See also A NOTE ON PCA ACCESS

       Tahoe-LAFS

              tahoe://alias/directory

       WebDAV

              webdav[s]://user[:password]@other.host[:port]/some_dir

              alternatively try lftp+webdav[s]://

       pydrive

              pydrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir

              See also A NOTE ON PYDRIVE BACKEND below.

       multi

              multi:///path/to/config.json

              See also A NOTE ON MULTI BACKEND below.

       MediaFire

              mf://user[:password]@mediafire.com/some_dir

              See also A NOTE ON MEDIAFIRE BACKEND below.


TIME FORMATS
       duplicity uses time strings in two places.  Firstly, many of the files
       duplicity creates will have the time in their filenames in the w3
       datetime format as described in a w3 note at
       http://www.w3.org/TR/NOTE-datetime.  Basically they look like
       "2001-07-15T04:09:38-07:00", which means what it looks like.  The
       "-07:00" section means the time zone is 7 hours behind UTC.

       Secondly, the -t, --time, and --restore-time options take a time
       string, which can be given in any of several formats:

       1.     the string "now" (refers to the current time)

       2.     a sequence of digits, like "123456890" (indicating the time in
              seconds after the epoch)

       3.     A string like "2002-01-25T07:00:00+02:00" in datetime format

       4.     An interval, which is a number followed by one of the
              characters s, m, h, D, W, M, or Y (indicating seconds, minutes,
              hours, days, weeks, months, or years respectively), or a series
              of such pairs.  In this case the string refers to the time that
              preceded the current time by the length of the interval.  For
              instance, "1h78m" indicates the time that was one hour and 78
              minutes ago.  The calendar here is unsophisticated: a month is
              always 30 days, a year is always 365 days, and a day is always
              86400 seconds.

       5.     A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY,
              or MM-DD-YYYY, which indicates midnight on the day in question,
              relative to the current time zone settings.  For instance,
              "2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
              2002.
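
       For example, the following equivalent commands restore /home/me as it
       was three days ago (URL and path are placeholders):

              duplicity -t 3D sftp://uid@other.host/some_dir /home/me
              duplicity --time 3D sftp://uid@other.host/some_dir /home/me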


FILE SELECTION
       When duplicity is run, it searches through the given source directory
       and backs up all the files specified by the file selection system.
       The file selection system comprises a number of file selection
       conditions, which are set using one of the following command line
       options:
              --exclude
              --exclude-device-files
              --exclude-if-present
              --exclude-filelist
              --exclude-regexp
              --include
              --include-filelist
              --include-regexp
       Each file selection condition either matches or doesn't match a given
       file.  A given file is excluded by the file selection system exactly
       when the first matching file selection condition specifies that the
       file be excluded; otherwise the file is included.

       For instance,

              duplicity --include /usr --exclude /usr /usr
              scp://user@host/backup

       is exactly the same as

              duplicity /usr scp://user@host/backup

       because the include and exclude directives match exactly the same
       files, and the --include comes first, giving it precedence.
       Similarly,

              duplicity --include /usr/local/bin --exclude /usr/local /usr
              scp://user@host/backup

       would back up the /usr/local/bin directory (and its contents), but
       not /usr/local/doc.

       The include, exclude, include-filelist, and exclude-filelist options
       accept some extended shell globbing patterns.  These patterns can
       contain *, **, ?, and [...] (character ranges).  As in a normal shell,
       * can be expanded to any string of characters not containing "/", ?
       expands to any character except "/", and [...] expands to a single
       character of those characters specified (ranges are acceptable).  The
       new special pattern, **, expands to any string of characters whether
       or not it contains "/".  Furthermore, if the pattern starts with
       "ignorecase:" (case insensitive), then this prefix will be removed and
       any character in the string can be replaced with an upper- or
       lowercase version of itself.

       Remember that you may need to quote these characters when typing them
       into a shell, so the shell does not interpret the globbing patterns
       before duplicity sees them.

       The --exclude pattern option matches a file if:

       1.  pattern can be expanded into the file's filename, or
       2.  the file is inside a directory matched by the option.

       Conversely, the --include pattern matches a file if:

       1.  pattern can be expanded into the file's filename, or
       2.  the file is inside a directory matched by the option, or
       3.  the file is a directory which contains a file matched by the
           option.

       For example,

              --exclude /usr/local

       matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
       It is the same as --exclude /usr/local --exclude '/usr/local/**'.

       On the other hand

              --include /usr/local

       specifies that /usr, /usr/local, /usr/local/lib, and
       /usr/local/lib/netscape (but not /usr/doc) all be backed up.  Thus you
       don't have to worry about including parent directories to make sure
       that included subdirectories have somewhere to go.

       Finally,

              --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'

       would match a file like /usR/5fOO/hello/there/world.py.  If it did
       match anything, it would also match /usr.  If there is no existing
       file that the given pattern can be expanded into, the option will not
       match /usr alone.

       The --include-filelist, and --exclude-filelist, options also introduce
       file selection conditions.  They direct duplicity to read in a text
       file (either ASCII or UTF-8), each line of which is a file
       specification, and to include or exclude the matching files.  Lines
       are separated by newlines or nulls, depending on whether the
       --null-separator switch was given.  Each line in the filelist will be
       interpreted as a globbing pattern the way --include and --exclude
       options are interpreted, except that lines starting with "+ " are
       interpreted as include directives, even if found in a filelist
       referenced by --exclude-filelist.  Similarly, lines starting with "- "
       exclude files even if they are found within an include filelist.

       For example, if file "list.txt" contains the lines:

              /usr/local
              - /usr/local/doc
              /usr/local/bin
              + /var
              - /var

       then --include-filelist list.txt would include /usr, /usr/local, and
       /usr/local/bin.  It would exclude /usr/local/doc,
       /usr/local/doc/python, etc.  It would also include /usr/local/man, as
       this is included within /usr/local.  Finally, it is undefined what
       happens with /var.  A single file list should not contain conflicting
       file specifications.

       Each line in the filelist will also be interpreted as a globbing
       pattern the way --include and --exclude options are interpreted.  For
       instance, if the file "list.txt" contains the lines:

              dir/foo
              + dir/bar
              - **

       Then --include-filelist list.txt would be exactly the same as
       specifying --include dir/foo --include dir/bar --exclude ** on the
       command line.

       Finally, the --include-regexp and --exclude-regexp options allow files
       to be included and excluded if their filenames match a python regular
       expression.  Regular expression syntax is too complicated to explain
       here, but is covered in Python's library reference.  Unlike the
       --include and --exclude options, the regular expression options don't
       match files containing or contained in matched files.  So for instance

              --include '[0-9]{7}(?!foo)'

       matches any files whose full pathnames contain 7 consecutive digits
       which aren't followed by 'foo'.  However, it wouldn't match /home even
       if /home/ben/1234567 existed.


A NOTE ON AMAZON DRIVE
       1.     The API Keys used for Amazon Drive have not been granted
              production limits.  Amazon do not say what the development
              limits are and are not replying to requests to whitelist
              duplicity.  A related tool, acd_cli, was demoted to development
              limits, but continues to work fine except for cases of
              excessive usage.  If you experience throttling and similar
              issues with Amazon Drive using this backend, please report them
              to the mailing list.

       2.     If you previously used the acd+acdcli backend, it is strongly
              recommended to update to the ad backend instead, since it
              interfaces directly with Amazon Drive.  You will need to set up
              the OAuth once again, but can otherwise keep your backups and
              config.


A NOTE ON AMAZON S3
       When backing up to Amazon S3, two backend implementations are
       available.  The schemes "s3" and "s3+http" are implemented using the
       older boto library, which has been deprecated and is no longer
       supported.  The "boto3+s3" scheme is based on the newer boto3 library.
       This new backend fixes several known limitations in the older backend,
       which have crept in as Amazon S3 has evolved while the deprecated boto
       library has not kept up.

       The boto3 backend should behave largely the same as the older S3
       backend, but there are some differences in the handling of some of the
       "S3" options.  Additionally, there are some compatibility differences
       with the new backend.  Because of these reasons, both backends have
       been retained for the time being.  See the documentation for specific
       options regarding differences related to each backend.

       The boto3 backend does not support bucket creation.  This is a
       deliberate choice which simplifies the code, and side-steps problems
       related to region selection.  Additionally, it is probably not a good
       practice to give your backup role bucket creation rights.  In most
       cases the role used for backups should probably be limited to specific
       buckets.

       The boto3 backend only supports newer domain-style buckets.  Amazon is
       moving to deprecate the older bucket style, so migration is
       recommended.  Use the older s3 backend for compatibility with backups
       stored in buckets using older naming conventions.

       The boto3 backend does not currently support initiating restores from
       the glacier storage class.  When restoring a backup from glacier or
       glacier deep archive, the backup files must first be restored out of
       band.  There are multiple options when restoring backups from cold
       storage, which vary in both cost and speed.  See Amazon's
       documentation for details.


A NOTE ON AZURE ACCESS
       The Azure backend requires the Microsoft Azure Storage SDK for Python
       to be installed on the system.  See REQUIREMENTS above.

       It uses environment variables for authentication: AZURE_ACCOUNT_NAME
       (required), AZURE_ACCOUNT_KEY (optional),
       AZURE_SHARED_ACCESS_SIGNATURE (optional).  One of AZURE_ACCOUNT_KEY or
       AZURE_SHARED_ACCESS_SIGNATURE is required.
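
       A minimal sketch of a backup using account-key authentication; the
       account name, key and container name are placeholders:

              AZURE_ACCOUNT_NAME=myaccount AZURE_ACCOUNT_KEY=mykey
              duplicity /home/me azure://my-backup-container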

       A container name must be a valid DNS name, conforming to the following
       naming rules:


              1.     Container names must start with a letter or number, and
                     can contain only letters, numbers, and the dash (-)
                     character.

              2.     Every dash (-) character must be immediately preceded
                     and followed by a letter or number; consecutive dashes
                     are not permitted in container names.

              3.     All letters in a container name must be lowercase.

              4.     Container names must be from 3 through 63 characters
                     long.


A NOTE ON CLOUD FILES ACCESS
       Pyrax is Rackspace's next-generation Cloud management API, including
       Cloud Files access.  The cfpyrax backend requires the pyrax library to
       be installed on the system.  See REQUIREMENTS above.

       Cloudfiles is Rackspace's now deprecated implementation of the
       OpenStack Object Storage protocol.  Users wishing to use Duplicity
       with Rackspace Cloud Files should migrate to the new Pyrax plugin to
       ensure support.

       The backend requires python-cloudfiles to be installed on the system.
       See REQUIREMENTS above.

       It uses three environment variables for authentication:
       CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
       CLOUDFILES_AUTHURL (optional)

       If CLOUDFILES_AUTHURL is unspecified, it will default to the value
       provided by python-cloudfiles, which points to Rackspace; hence this
       value must be set in order to use other Cloud Files providers.


A NOTE ON DROPBOX ACCESS
       1.     First of all, the Dropbox backend requires a valid
              authentication token.  It should be passed via the
              DPBX_ACCESS_TOKEN environment variable (see the sketch after
              this list).
              To obtain it, please create a 'Dropbox API' application at:
              https://www.dropbox.com/developers/apps/create
              Then visit app settings and just use 'Generated access token'
              under the OAuth2 section.
              Alternatively you can let duplicity generate the access token
              itself.  In that case, temporarily export DPBX_APP_KEY and
              DPBX_APP_SECRET using values from the app settings page and
              run duplicity interactively.
              It will print the URL that you need to open in the browser to
              obtain the OAuth2 token for the application.  Just follow the
              on-screen instructions and then put the generated token into
              the DPBX_ACCESS_TOKEN variable.  Once done, feel free to unset
              DPBX_APP_KEY and DPBX_APP_SECRET.


       2.     "some_dir" must already exist in the Dropbox folder.  Depending
              on the access token kind it may be:
                     Full Dropbox: path is absolute and starts from the
                     'Dropbox' root folder.
                     App Folder: path is relative to the application folder.
                     Dropbox client will show it in ~/Dropbox/Apps/<app-name>


       3.     When using Dropbox for storage, be aware that all files,
              including the ones in the Apps folder, will be synced to all
              connected computers.  You may prefer to use a separate Dropbox
              account specially for the backups, and not connect any
              computers to that account.  Alternatively you can configure
              selective sync on all computers to avoid syncing of backup
              files.
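
       A minimal sketch of the token-based invocation referenced in item 1;
       the token and paths are placeholders:

              DPBX_ACCESS_TOKEN=my_token duplicity /home/me dpbx:///some_dir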

A NOTE ON EUROPEAN S3 BUCKETS
       Amazon S3 provides the ability to choose the location of a
       bucket upon its creation.  The purpose is to enable the user
       to choose a location that is topologically closer on the
       network, since this may allow for faster data transfers.

       duplicity will create a new bucket the first time a bucket
       access is attempted.  At this point, the bucket will be
       created in Europe if --s3-european-buckets was given.  For
       reasons having to do with how the Amazon S3 service works,
       this also requires the use of the --s3-use-new-style option.
       This option turns on subdomain-based bucket addressing in S3.
       The details are beyond the scope of this man page, but it is
       important to know that your bucket must not contain upper case
       letters or any other characters that are not valid parts of a
       hostname.  Consequently, for reasons of backwards
       compatibility, use of subdomain-based bucket addressing is not
       enabled by default.

       Note that you will need to use --s3-use-new-style for all
       operations on European buckets, not just upon initial
       creation.

       You only need to use --s3-european-buckets upon initial
       creation, but you may use it at all times for consistency.
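
       For example, a backup to a (hypothetical) European bucket could
       be run as:

              duplicity --s3-use-new-style --s3-european-buckets \
                     /home/me s3+http://my-eu-bucket/some_dir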

       Further note that when creating a new European bucket, it can
       take a while before the bucket is fully accessible.  At the
       time of this writing it is unclear to what extent this is an
       expected feature of Amazon S3, but in practice you may
       experience timeouts, socket errors or HTTP errors when trying
       to upload files to your newly created bucket.  Give it a few
       minutes and the bucket should function normally.

A NOTE ON FILENAME PREFIXES
       Filename prefixes can be used in multi backend with mirror
       mode to define affinity rules.  They can also be used in
       conjunction with S3 lifecycle rules to transition archive
       files to Glacier, while keeping metadata (signature and
       manifest files) on S3.

       Duplicity does not require access to archive files except when
       restoring from backup.
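
       As an illustrative sketch (the prefix strings and bucket name
       are arbitrary), archive volumes can be tagged separately from
       metadata, so that an S3 lifecycle rule matching the archive_
       prefix moves only the volumes to Glacier:

              duplicity --file-prefix-archive archive_ \
                     --file-prefix-manifest meta_ \
                     --file-prefix-signature meta_ \
                     /home/me s3+http://my-bucket/some_dir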

A NOTE ON GOOGLE CLOUD STORAGE
       Support for Google Cloud Storage relies on its Interoperable
       Access, which must be enabled for your account.  Once enabled,
       you can generate Interoperable Storage Access Keys and pass
       them to duplicity via the GS_ACCESS_KEY_ID and
       GS_SECRET_ACCESS_KEY environment variables.  Alternatively,
       you can run gsutil config -a to have the Google Cloud Storage
       utility populate the ~/.boto configuration file.
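
       For instance (key values and the bucket name are placeholders):

              export GS_ACCESS_KEY_ID=GOOGABCDEFG
              export GS_SECRET_ACCESS_KEY=mysecretkey
              duplicity /home/me gs://my-bucket/some_dir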

       Enable Interoperable Access:
       https://code.google.com/apis/console#:storage
       Create Access Keys:
       https://code.google.com/apis/console#:storage:legacy

A NOTE ON HUBIC
       The hubic backend requires the pyrax library to be installed
       on the system.  See REQUIREMENTS above.  You will need to set
       your credentials for hubiC in a file called
       ~/.hubic_credentials, following this pattern:

              [hubic]
              email = your_email
              password = your_password
              client_id = api_client_id
              client_secret = api_secret_key
              redirect_uri = http://localhost/
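
       With the credentials file in place, a backup to a
       (hypothetical) container named my_backups would then be
       started with:

              duplicity /home/me cf+hubic://my_backups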

A NOTE ON IMAP
       An IMAP account can be used as a target for the upload.  The
       userid may be specified and the password will be requested.

       The from_address_prefix may be specified (and probably should
       be).  The text will be used as the "From" address in the IMAP
       server.  Then on a restore (or list) command the
       from_address_prefix will distinguish between different
       backups.
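
       A sketch of such a target URL (host and prefix are
       placeholders):

              duplicity /home/me \
                     imaps://me@mail.example.com/backup_prefix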

A NOTE ON MULTI BACKEND
       The multi backend allows duplicity to combine the storage
       available in more than one backend store (e.g., you can store
       across a Google Drive account and a OneDrive account to get
       effectively the combined storage available in both).  The URL
       path specifies a JSON-formatted config file containing a list
       of the backends it will use.  The URL may also specify "query"
       parameters to configure overall behavior.  Each element of the
       list must have a "url" element, and may also contain an
       optional "description" and an optional "env" list of
       environment variables used to configure that backend.

   Query Parameters
       Query parameters come after the file URL in standard HTTP
       format, for example:
              multi:///path/to/config.json?mode=mirror&onfail=abort
              multi:///path/to/config.json?mode=stripe&onfail=continue
              multi:///path/to/config.json?onfail=abort&mode=stripe
              multi:///path/to/config.json?onfail=abort
       Order does not matter, but unrecognized parameters are
       considered an error.

       mode=stripe
              This mode (the default) performs round-robin access to
              the list of backends.  In this mode, all backends must
              be reliable, as the loss of one backend means the loss
              of part of the archive files.

       mode=mirror
              This mode accesses the backends as a RAID1 store,
              writing every file to every backend and reading files
              from the first backend that succeeds.  A loss of any
              one backend should result in no failure.  Note that
              backends added later will only receive new files and
              may require a manual sync with one of the other
              operating ones.

       onfail=continue
              This setting (the default) continues all write
              operations on a best-effort basis.  Any failure causes
              the next backend to be tried.  Failure is reported only
              when all backends fail a given operation, with the
              error result taken from the last failure.

       onfail=abort
              This setting treats any backend write failure as a
              terminating condition and reports the error.  Data
              reading and listing operations are independent of this
              and will try the next backend on failure.

   JSON File Example
              [
               {
                "description": "a comment about the backend",
                "url": "abackend://myuser@domain.com/backup",
                "env": [
                  {
                   "name" : "MYENV",
                   "value" : "xyz"
                  },
                  {
                   "name" : "FOO",
                   "value" : "bar"
                  }
                 ],
                "prefixes": ["prefix1_", "prefix2_"]
               },
               {
                "url": "file:///path/to/dir"
               }
              ]
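
       Assuming the config file above is saved at
       /path/to/config.json, a mirrored backup could then be run as:

              duplicity /home/me \
                     "multi:///path/to/config.json?mode=mirror"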

A NOTE ON PAR2 WRAPPER BACKEND
       The Par2 Wrapper Backend can be used in combination with all
       other backends to create recovery files.  Just add par2+
       before a regular scheme (e.g.  par2+ftp://user@host/dir or
       par2+s3+http://bucket_name ).  This will create par2 recovery
       files for each archive and upload them all to the wrapped
       backend.

       Before restoring, archives will be verified.  Corrupt archives
       will be repaired on the fly if there are enough recovery
       blocks available.

       Use --par2-redundancy percent to adjust the size (and
       redundancy) of the recovery files, in percent.
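
       For example, to keep 10% redundancy data alongside the
       archives on an sftp server:

              duplicity --par2-redundancy 10 /home/me \
                     par2+sftp://uid@other.host/some_dir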

A NOTE ON PYDRIVE BACKEND
       The pydrive backend requires the Python PyDrive package to be
       installed on the system.  See REQUIREMENTS above.

       There are two ways to use PyDrive: with a regular account or
       with a "service account".  With a service account, a separate
       account is created that is accessible only through the Google
       APIs, not through a web login.  With a regular account, you
       can store backups in your normal Google Drive.

       To use a service account, go to the Google developers console
       at https://console.developers.google.com.  Create a project,
       and make sure the Drive API is enabled for the project.  Under
       "APIs and auth", click Create New Client ID, then select
       Service Account with a P12 key.

       Download the .p12 key file of the account and convert it to
       the .pem format:

              openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem

       The content of the .pem file should be passed to the
       GOOGLE_DRIVE_ACCOUNT_KEY environment variable for
       authentication.

       The email address of the account will be used as part of the
       URL.  See URL FORMAT above.
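
       Putting these together (the paths and the service account
       address are placeholders):

              export GOOGLE_DRIVE_ACCOUNT_KEY="$(cat pydriveprivatekey.pem)"
              duplicity /home/me \
                     pydrive://myaccount@developer.gserviceaccount.com/some_dir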

       The alternative is to use a regular account.  To do this,
       start as above, but when creating a new Client ID, select
       "Installed application" of type "Other".  Create a file with
       the following content, and pass its filename in the
       GOOGLE_DRIVE_SETTINGS environment variable:

              client_config_backend: settings
              client_config:
                  client_id: <Client ID from developers' console>
                  client_secret: <Client secret from developers' console>
              save_credentials: True
              save_credentials_backend: file
              save_credentials_file: <filename to cache credentials>
              get_refresh_token: True

       In this scenario, the username and host parts of the URL play
       no role; only the path matters.  During the first run, you
       will be prompted to visit a URL in your browser to grant
       access to your Drive.  Once granted, you will receive a
       verification code to paste back into duplicity.  The
       credentials are then cached in the file referenced above for
       future use.

A NOTE ON RCLONE BACKEND
       Rclone is a powerful command line program to sync files and
       directories to and from various cloud storage providers.

       Once you have configured an rclone remote via

              rclone config

       and successfully set up a remote (e.g. gdrive for Google
       Drive), assuming you can list your remote files with

              rclone ls gdrive:mydocuments

       you can start your backup with

              duplicity /mydocuments rclone://gdrive:/mydocuments

       Please note the slash after the second colon.  Some storage
       providers will work with or without a slash after the colon,
       but others will not.  Since duplicity will complain about a
       malformed URL if the slash is missing, always put it after the
       colon, and the backend will handle it for you.

A NOTE ON SSH BACKENDS
       The ssh backends support the sftp and scp/ssh transport
       protocols.  These are fundamentally different, which is a
       known source of user confusion.  If you plan to access your
       backend via one of them, please inform yourself about the
       requirements for a server to support sftp or scp/ssh access.
       To complicate matters further, the user can choose between
       several ssh backends via a scheme prefix: paramiko+ (default),
       pexpect+, lftp+... .  paramiko & pexpect support --use-scp,
       --ssh-askpass and --ssh-options.  Only the pexpect backend
       allows defining --scp-command and --sftp-command.
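
       For instance, to force the pexpect backend over sftp, or the
       default paramiko backend in scp mode (host and user are
       placeholders):

              duplicity /home/me pexpect+sftp://uid@other.host/some_dir
              duplicity --use-scp /home/me \
                     paramiko+scp://uid@other.host/some_dir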

       The SSH paramiko backend (default) is a complete
       reimplementation of the ssh protocols natively in Python.  Its
       advantages are speed and maintainability.  A minor
       disadvantage is that extra packages are needed, as listed in
       REQUIREMENTS above.  In sftp (default) mode all operations are
       done via the corresponding sftp commands.  In scp mode
       ( --use-scp ), scp is used for put/get operations, but listing
       is done via the ssh remote shell.

       The SSH pexpect backend is the legacy ssh backend, using the
       command line ssh binaries via pexpect.  Older versions used
       scp for get and put operations and sftp for list and delete
       operations.  The current version uses sftp for all four
       supported operations, unless the --use-scp option is used to
       revert to the old behavior.

       The SSH lftp backend exists simply because lftp can interact
       with the ssh command line binaries.  It is meant as a last
       resort in case the above options fail for some reason.

       Why use sftp instead of scp?  The change to sftp was made in
       order to allow the remote system to chroot the backup, thus
       providing better security, and because sftp does not suffer
       from the shell quoting issues that scp does.  Scp also does
       not support any kind of file listing, so sftp or ssh access
       will always be needed in addition for this backend mode to
       work properly.  Sftp does not have these limitations but needs
       an sftp service running on the backend server, which is
       sometimes not an option.

A NOTE ON SSL CERTIFICATE VERIFICATION
       Certificate verification, as implemented right now [02.2016],
       exists only in the webdav and lftp backends.  Older Python
       versions (2.7.8 and below) and older lftp binaries need a
       file-based database of certification authority certificates
       (cacert file).
       Newer Python versions (2.7.9+) and recent lftp versions,
       however, support the system default certificates (usually in
       /etc/ssl/certs) and also allow giving an alternative CA cert
       folder via --ssl-cacert-path.

       The cacert file has to be a PEM formatted text file, as
       currently provided by the CURL project.  See

              http://curl.haxx.se/docs/caextract.html

       After creating/retrieving a valid cacert file you should copy
       it to either

              ~/.duplicity/cacert.pem
              ~/duplicity_cacert.pem
              /etc/duplicity/cacert.pem

       Duplicity searches these locations in that order and will fail
       if it can't find a cacert file.  You can however specify the
       option --ssl-cacert-file <file> to point duplicity to a copy
       in a different location.

       Finally, there is the --ssl-no-check-certificate option to
       disable certificate verification altogether, in case some ssl
       library is missing or verification is not wanted.  Use it with
       care; even with self-signed servers, manually providing the
       private CA certificate is definitely the safer option.

A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) BACKEND
       Swift is the OpenStack Object Storage service.
       The backend requires python-swiftclient to be installed on the
       system.  python-keystoneclient is also needed to use
       OpenStack's Keystone Identity service.  See REQUIREMENTS
       above.

       It uses the following environment variables for
       authentication: SWIFT_USERNAME (required), SWIFT_PASSWORD
       (required), SWIFT_AUTHURL (required), SWIFT_USERID (required,
       only for IBM Bluemix ObjectStorage), SWIFT_TENANTID (required,
       only for IBM Bluemix ObjectStorage), SWIFT_REGIONNAME
       (required, only for IBM Bluemix ObjectStorage),
       SWIFT_TENANTNAME (optional, the tenant can be included in the
       username)

       If the user was previously authenticated, the following
       environment variables can be used instead: SWIFT_PREAUTHURL
       (required), SWIFT_PREAUTHTOKEN (required)

       If SWIFT_AUTHVERSION is unspecified, it will default to
       version 1.
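
       A sketch of a Keystone-authenticated invocation (all values
       and the container name are placeholders):

              export SWIFT_USERNAME=myuser
              export SWIFT_PASSWORD=mypassword
              export SWIFT_AUTHURL=https://auth.example.com/v2.0
              export SWIFT_AUTHVERSION=2
              duplicity /home/me swift://my_container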

A NOTE ON PCA ACCESS
       PCA is a long-term data archival solution by OVH.  It runs a
       slightly modified version of OpenStack Swift, introducing
       latency in the data retrieval process.  It is a good pick for
       a multi backend configuration, receiving the volumes while
       another backend is used to store manifests and signatures.

       The backend requires python-swiftclient to be installed on the
       system.  python-keystoneclient is also needed to interact with
       OpenStack's Keystone Identity service.  See REQUIREMENTS
       above.

       It uses the following environment variables for
       authentication: PCA_USERNAME (required), PCA_PASSWORD
       (required), PCA_AUTHURL (required), PCA_USERID (optional),
       PCA_TENANTID (optional, but either the tenant name or tenant
       id must be supplied), PCA_REGIONNAME (optional), PCA_TENANTNAME
       (optional, but either the tenant name or tenant id must be
       supplied)

       If the user was previously authenticated, the following
       environment variables can be used instead: PCA_PREAUTHURL
       (required), PCA_PREAUTHTOKEN (required)

       If PCA_AUTHVERSION is unspecified, it will default to version
       2.

A NOTE ON MEDIAFIRE BACKEND
       This backend requires the mediafire Python library to be
       installed on the system.  See REQUIREMENTS above.

       Use URL escaping for the username (and password, if provided
       via the command line):

              mf://duplicity%40example.com@mediafire.com/some_folder

       The destination folder will be created for you if it does not
       exist.

A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
       Signing and symmetrically encrypting at the same time with the
       gpg binary on the command line, as used within duplicity, is a
       particularly challenging issue.  Tests showed that the
       following combinations work:

       1. Set up gpg-agent properly.  Use the option --use-agent and
       enter both passphrases (symmetric and sign key) in the
       gpg-agent's dialog.

       2. Use a PASSPHRASE of your choice for symmetric encryption,
       while the signing key has an empty passphrase.

       3. The PASSPHRASE used for symmetric encryption and the
       passphrase of the signing key are identical.
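
       As an illustration of variant 1 (the key ID is a placeholder),
       a signed, symmetrically encrypted backup might be run as:

              duplicity --use-agent --sign-key ABCD1234 \
                     /home/me sftp://uid@other.host/some_dir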

KNOWN ISSUES / BUGS
       Hard links are currently unsupported (they will be treated as
       non-linked regular files).

       Bad signatures will be treated as empty instead of logging an
       appropriate error message.

OPERATION AND DATA FORMATS
       This section describes duplicity's basic operation and the
       format of its data files.  It should not be necessary to read
       this section to use duplicity.

       The files used by duplicity to store backup data are tarfiles
       in GNU tar format.  They can be produced independently by
       rdiffdir(1).  For incremental backups, new files are saved
       normally in the tarfile.  But when a file changes, instead of
       storing a complete copy of the file, only a diff is stored, as
       generated by rdiff(1).  If a file is deleted, a zero-length
       file is stored in the tar.  It is possible to restore a
       duplicity archive "manually" by using tar and then cp, rdiff,
       and rm as necessary.  These duplicity archives have the
       extension difftar.
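
       A rough sketch of such a manual restore (the filenames are
       illustrative only; real archive names encode a timestamp, and
       the gpg step applies only when encryption is enabled):

              gpg -d duplicity-full.20200101T000000Z.vol1.difftar.gpg \
                     > vol1.difftar
              tar -xf vol1.difftar
              rdiff patch basis/file delta/file.diff restored/file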

       Both full and incremental backup sets have the same format.
       In effect, a full backup set is an incremental one generated
       from an empty signature (see below).  The files in full backup
       sets will start with duplicity-full while the incremental sets
       start with duplicity-inc.  When restoring, duplicity applies
       patches in order, so deleting, for instance, a full backup set
       may make related incremental backup sets unusable.

       In order to determine which files have been deleted, and to
       calculate diffs for changed files, duplicity needs to process
       information about previous sessions.  It stores this
       information in the form of tarfiles where each entry's data
       contains the signature (as produced by rdiff) of the file
       instead of the file's contents.  These signature sets have the
       extension sigtar.

       Signature files are not required to restore a backup set, but
       without an up-to-date signature, duplicity cannot append an
       incremental backup to an existing archive.

       To save bandwidth, duplicity generates full signature sets and
       incremental signature sets.  A full signature set is generated
       for each full backup, and an incremental one for each
       incremental backup.  These start with duplicity-full-signatures
       and duplicity-new-signatures respectively.  These signatures
       will be stored both locally and remotely.  The remote
       signatures will be encrypted if encryption is enabled.  The
       local signatures will not be encrypted and are stored in the
       archive dir (see --archive-dir ).

REQUIREMENTS
       Duplicity requires a POSIX-like operating system with a Python
       interpreter version 2.6+ installed.  It is best used under
       GNU/Linux.

       Some backends also require additional components (probably
       available as packages for your specific platform):

       Amazon Drive backend
              python-requests - http://python-requests.org
              python-requests-oauthlib -
              https://github.com/requests/requests-oauthlib

       azure backend (Azure Blob Storage Service)
              Microsoft Azure Storage SDK for Python -
              https://pypi.python.org/pypi/azure-storage/

       boto backend (S3 Amazon Web Services, Google Cloud Storage)
              boto version 2.0+ - http://github.com/boto/boto

       cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
              Rackspace CloudFiles Pyrax API -
              http://docs.rackspace.com/sdks/guide/content/python.html

       dpbx backend (Dropbox)
              Dropbox Python SDK -
              https://www.dropbox.com/developers/reference/sdk

       gdocs gdata backend (legacy Google Docs backend)
              Google Data APIs Python Client Library -
              http://code.google.com/p/gdata-python-client/

       gdocs pydrive backend (default)
              see pydrive backend

       gio backend (Gnome VFS API)
              PyGObject - http://live.gnome.org/PyGObject
              D-Bus (dbus) - http://www.freedesktop.org/wiki/Software/dbus

       lftp backend (needed for ftp, ftps, fish [over ssh] - also
       supports sftp, webdav[s])
              LFTP Client - http://lftp.yar.ru/

       mega backend (mega.co.nz)
              megatools client - https://github.com/megous/megatools

       multi backend
              Multi -- store to more than one backend
              (also see A NOTE ON MULTI BACKEND below).

       ncftp backend (ftp, select via ncftp+ftp://)
              NcFTP - http://www.ncftp.com/

       OneDrive backend (Microsoft OneDrive)
              python-requests - http://python-requests.org
              python-requests-oauthlib -
              https://github.com/requests/requests-oauthlib

       Par2 Wrapper Backend
              par2cmdline - http://parchive.sourceforge.net/

       pydrive backend
              PyDrive -- a wrapper library of google-api-python-client
              - https://pypi.python.org/pypi/PyDrive
              (also see A NOTE ON PYDRIVE BACKEND below).

       rclone backend
              rclone - https://rclone.org/

       rsync backend
              rsync client binary - http://rsync.samba.org/

       ssh paramiko backend (default)
              paramiko (SSH2 for python) -
              http://pypi.python.org/pypi/paramiko (downloads);
              http://github.com/paramiko/paramiko (project page)
              pycrypto (Python Cryptography Toolkit) -
              http://www.dlitz.net/software/pycrypto/

       ssh pexpect backend
              sftp/scp client binaries OpenSSH - http://www.openssh.com/
              Python pexpect module -
              http://pexpect.sourceforge.net/pexpect.html

       swift backend (OpenStack Object Storage)
              Python swiftclient module -
              https://github.com/openstack/python-swiftclient/
              Python keystoneclient module -
              https://github.com/openstack/python-keystoneclient/

       webdav backend
              certificate authority database file for ssl certificate
              verification of HTTPS connections -
              http://curl.haxx.se/docs/caextract.html
              (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
              Python kerberos module for kerberos authentication -
              https://github.com/02strich/pykerberos

       MediaFire backend
              MediaFire Python Open SDK -
              https://pypi.python.org/pypi/mediafire/

AUTHOR
       Original Author - Ben Escoto <bescoto@stanford.edu>

       Current Maintainer - Kenneth Loafman <kenneth@loafman.com>

       Continuous Contributors
              Edgar Soldin, Mike Terry

       Most backends were contributed individually.  Information
       about their authorship may be found in the corresponding
       file's header.

       We would also like to thank everybody posting issues to the
       mailing list or on Launchpad, sending in patches, or otherwise
       contributing.  Duplicity wouldn't be as stable and useful if
       it weren't for you.

       A special thanks goes to rsync.net, a Cloud Storage provider
       with explicit support for duplicity, for several monetary
       donations and for providing a special "duplicity friends" rate
       for their offsite backup service.  Email info@rsync.net for
       details.

SEE ALSO
       rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).



Version 0.8.12.1612               March 19, 2020               DUPLICITY(1)