1 DUPLICITY(1)                    User Manuals                    DUPLICITY(1)
2
3
4
5 NAME
6 duplicity - Encrypted incremental backup to local or remote storage.
7
8
9 SYNOPSIS
10 For detailed descriptions for each command see chapter ACTIONS.
11
12 duplicity [full|incremental] [options] source_directory target_url
13
14 duplicity verify [options] [--compare-data] [--file-to-restore
15 <relpath>] [--time time] source_url target_directory
16
17 duplicity collection-status [options] [--file-changed <relpath>]
18 [--show-changes-in-set <index>] target_url
19
20 duplicity list-current-files [options] [--time time] target_url
21
22 duplicity [restore] [options] [--file-to-restore <relpath>] [--time
23 time] source_url target_directory
24
25 duplicity remove-older-than <time> [options] [--force] target_url
26
27 duplicity remove-all-but-n-full <count> [options] [--force] target_url
28
29 duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
30 target_url
31
32 duplicity cleanup [options] [--force] target_url
33
34 duplicity replicate [options] [--time time] source_url target_url
35
36
37 DESCRIPTION
38 Duplicity incrementally backs up files and folders into tar-format
39 volumes encrypted with GnuPG and places them on a remote (or local)
40 storage backend. See chapter URL FORMAT for a list of all supported
41 backends and how to address them. Because duplicity uses librsync,
42 incremental backups are space efficient and only record the parts of
43 files that have changed since the last backup. Currently duplicity
44 supports deleted files, full Unix permissions, uid/gid, directories,
45 symbolic links, fifos, etc., but not hard links.
46
47 If you are backing up the root directory /, remember to --exclude
48 /proc, or else duplicity will probably crash on the weird stuff in
49 there.
50
51
52 EXAMPLES
53 Here is an example of a backup, using sftp to back up /home/me to
54 some_dir on the other.host machine:
55
56 duplicity /home/me sftp://uid@other.host/some_dir
57
58 If the above is run repeatedly, the first will be a full backup, and
59 subsequent ones will be incremental. To force a full backup, use the
60 full action:
61
62 duplicity full /home/me sftp://uid@other.host/some_dir
63
64 or forcing a full backup periodically via --full-if-older-than <time>,
65 e.g. a full backup every month:
66
67 duplicity --full-if-older-than 1M /home/me
68 sftp://uid@other.host/some_dir
69
70 Now suppose we accidentally delete /home/me and want to restore it the
71 way it was at the time of last backup:
72
73 duplicity sftp://uid@other.host/some_dir /home/me
74
75 Duplicity enters restore mode because the URL comes before the local
76 directory. If we wanted to restore just the file "Mail/article" in
77 /home/me as it was three days ago into /home/me/restored_file:
78
79 duplicity -t 3D --file-to-restore Mail/article
80 sftp://uid@other.host/some_dir /home/me/restored_file
81
82 The following command compares the latest backup with the current
83 files:
84
85 duplicity verify sftp://uid@other.host/some_dir /home/me
86
87 Finally, duplicity recognizes several include/exclude options. For
88 instance, the following will back up the root directory, but exclude
89 /mnt, /tmp, and /proc:
90
91 duplicity --exclude /mnt --exclude /tmp --exclude /proc /
92 file:///usr/local/backup
93
94 Note that in this case the destination is the local directory
95 /usr/local/backup. The following will back up only the /home and /etc
96 directories under root:
97
98 duplicity --include /home --include /etc --exclude '**' /
99 file:///usr/local/backup
100
101 Duplicity can also access a repository via ftp. If a user name is
102 given, the environment variable FTP_PASSWORD is read to determine the
103 password:
104
105 FTP_PASSWORD=mypassword duplicity /local/dir
106 ftp://user@other.host/some_dir
107
108
109 ACTIONS
110 Duplicity knows action commands, which can be fine-tuned with options.
111 The actions for backup (full, incr) and restoration (restore) may be
112 omitted, as duplicity detects which mode to use from the order of the
113 target URL and local folder: if the target URL comes before the local
114 folder, a restore is performed; if the local folder comes before the
115 target URL, then that folder is about to be backed up to the target
116 URL.
117 If a backup is in order and old signatures can be found duplicity
118 automatically performs an incremental backup.
119
120 NOTE: The following descriptions cover some but not all options that
121 can be used with each action command. Consult the OPTIONS section for
122 more detailed information.
123
124
125 full <folder> <url>
126 Perform a full backup. A new backup chain is started even if
127 signatures are available for an incremental backup.
128
129
130 incr <folder> <url>
131 Perform an incremental backup. Duplicity will abort if no old
132 signatures can be found.
133
134
135 verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
136 <url> <local_path>
137 Verify tests the integrity of the backup archives at the remote
138 location by downloading each file and checking both that it can
139 restore the archive and that the restored file matches the
140 signature of that file stored in the backup, i.e. compares the
141 archived file with its hash value from archival time. Verify
142 does not actually restore and will not overwrite any local
143 files. Duplicity will exit with a non-zero error level if any
144 files do not match the signature stored in the archive for that
145 file. On verbosity level 4 or higher, it will log a message for
146 each file that differs from the stored signature. Files must be
147 downloaded to the local machine in order to compare them.
148 Verify does not compare the backed-up version of the file to the
149 current local copy of the files unless the --compare-data option
150 is used (see below).
151 The --file-to-restore option restricts verify to that file or
152 folder. The --time option selects which backup to verify.
153 The --compare-data option enables data comparison (see below).
154
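For example, to check the backup from three days ago against the current
contents of /home/me (host and paths follow the EXAMPLES section):

duplicity verify --compare-data -t 3D
sftp://uid@other.host/some_dir /home/me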
155
156 collection-status [--file-changed <relpath>] [--show-changes-in-set
157 <index>] <url>
158 Summarize the status of the backup repository by printing the
159 chains and sets found, and the number of volumes in each.
160 The --file-changed option summarizes the changes to the file (in
161 the most recent backup chain). The --show-changes-in-set option
162 summarizes all the file changes in the index:th backup set
163 (where index 0 means the latest set, 1 means the next to latest,
164 etc.).
165
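For example, to summarize the file changes in the most recent backup set
(index 0):

duplicity collection-status --show-changes-in-set 0
sftp://uid@other.host/some_dir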
166
167 list-current-files [--time <time>] <url>
168 Lists the files contained in the most current backup or backup
169 at time. The information will be extracted from the signature
170 files, not the archive data itself. Thus the whole archive does
171 not have to be downloaded, but on the other hand if the archive
172 has been deleted or corrupted, this command will not detect it.
173
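For example, to list the files as they were in the backup of three days
ago:

duplicity list-current-files --time 3D sftp://uid@other.host/some_dir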
174
175 restore [--file-to-restore <relpath>] [--time <time>] <url>
176 <target_folder>
177 You can restore the full monty or selected folders/files from a
178 specific time. Use the relative path as it is printed by list-
179 current-files. Usually not needed as duplicity enters restore
180 mode when it detects that the URL comes before the local folder.
181
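For example, to restore only the folder Documents (a relative path as
printed by list-current-files; paths are illustrative) from the backup
of two days ago:

duplicity restore --file-to-restore Documents --time 2D
sftp://uid@other.host/some_dir /home/me/restored_documents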
182
183 remove-older-than <time> [--force] <url>
184 Delete all backup sets older than the given time. Old backup
185 sets will not be deleted if backup sets newer than time depend
186 on them. See the TIME FORMATS section for more information.
187 Note, this action cannot be combined with backup or other
188 actions, such as cleanup. Note also that --force will be needed
189 to delete the files instead of just listing them.
190
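For example, to actually delete (rather than just list) all backup sets
older than six months:

duplicity remove-older-than 6M --force sftp://uid@other.host/some_dir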
191
192 remove-all-but-n-full <count> [--force] <url>
193 Delete all backup sets that are older than the count:th last
194 full backup (in other words, keep the last count full backups
195 and associated incremental sets). count must be larger than
196 zero. A value of 1 means that only the single most recent backup
197 chain will be kept. Note that --force will be needed to delete
198 the files instead of just listing them.
199
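For example, to keep the last three full backups and their incremental
sets, deleting everything older:

duplicity remove-all-but-n-full 3 --force
sftp://uid@other.host/some_dir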
200
201 remove-all-inc-of-but-n-full <count> [--force] <url>
202 Delete incremental sets of all backup sets that are older than
203 the count:th last full backup (in other words, keep only old
204 full backups and not their increments). count must be larger
205 than zero. A value of 1 means that only the single most recent
206 backup chain will be kept intact. Note that --force will be
207 needed to delete the files instead of just listing them.
208
209
210 cleanup [--force] <url>
211 Delete the extraneous duplicity files on the given backend.
212 Non-duplicity files, or files in complete data sets, will not be
213 deleted. This should only be necessary after a duplicity
214 session fails or is aborted prematurely. Note that --force will
215 be needed to delete the files instead of just listing them.
216
217
218 replicate [--time time] <source_url> <target_url>
219 Replicate backup sets from source to target backend. Files will
220 be (re)-encrypted and (re)-compressed depending on normal
221 backend options. Signatures and volumes will not be recomputed,
222 thus options like --volsize or --max-blocksize have no effect.
223 When --time time is given, only backup sets older than time will
224 be replicated.
225
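For example, to mirror all backup sets older than one month from a local
repository to a remote one (URLs are illustrative):

duplicity replicate --time 1M file:///usr/local/backup
sftp://uid@other.host/some_dir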
226
227 OPTIONS
228 --allow-source-mismatch
229 Do not abort on attempts to use the same archive dir or remote
230 backend to back up different directories. duplicity will tell
231 you if you need this switch.
232
233
234 --archive-dir path
235 The archive directory.
236
237 NOTE: This option changed in 0.6.0. The archive directory is
238 now necessary in order to manage persistence for current and
239 future enhancements. As such, this option is now used only to
240 change the location of the archive directory. The archive
241 directory should not be deleted, or duplicity will have to
242 recreate it from the remote repository (which may require
243 decrypting the backup contents).
244
245 When backing up or restoring, this option specifies that the
246 local archive directory is to be created in path. If the
247 archive directory is not specified, the default will be to
248 create the archive directory in ~/.cache/duplicity/.
249
250 The archive directory can be shared between backups to multiple
251 targets, because a subdirectory of the archive dir is used for
252 individual backups (see --name ).
253
254 The combination of archive directory and backup name must be
255 unique in order to separate the data of different backups.
256
257 The interaction between the --archive-dir and the --name options
258 allows for four possible combinations for the location of the
259 archive dir:
260
261
262 1. neither specified (default)
263 ~/.cache/duplicity/hash-of-url
264
265 2. --archive-dir=/arch, no --name
266 /arch/hash-of-url
267
268 3. no --archive-dir, --name=foo
269 ~/.cache/duplicity/foo
270
271 4. --archive-dir=/arch, --name=foo
272 /arch/foo
273
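For instance, combination 4 above results from a command line like the
following (name and paths are illustrative):

duplicity --archive-dir /arch --name foo /home/me
sftp://uid@other.host/some_dir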
274
275 --asynchronous-upload
276 (EXPERIMENTAL) Perform file uploads asynchronously in the
277 background, with respect to volume creation. This means that
278 duplicity can upload a volume while, at the same time, preparing
279 the next volume for upload. The intended end-result is a faster
280 backup, because the local CPU and your bandwidth can be more
281 consistently utilized. Use of this option implies additional
282 need for disk space in the temporary storage location; rather
283 than needing to store only one volume at a time, enough storage
284 space is required to store two volumes.
285
286
287 --azure-blob-tier
288 Standard storage tier used for backup files (Hot|Cool|Archive).
289
290
291 --azure-max-single-put-size
292 Specify the largest supported upload size (in bytes) for which
293 the Azure library makes only one put call. If the content size
294 is known and below this value, the Azure library will perform
295 only one put request to upload one block.
297
298
299 --azure-max-block-size
300 Specify the block size (in bytes) used by the Azure library to
301 upload blobs that are split into multiple blocks. The maximum
302 block size the service supports is 104857600 (100MiB) and the
303 default is 4194304 (4MiB).
304
305
306 --azure-max-connections
307 Specify the maximum number of connections used to transfer one
308 blob to Azure when the blob size exceeds 64MB. The default value is 2.
309
310
311 --backend-retry-delay number
312 Specifies the number of seconds that duplicity waits after an
313 error has occurred before attempting to repeat the operation.
314
315
316 --cf-backend backend
317 Allows the explicit selection of a cloudfiles backend. Defaults
318 to pyrax. Alternatively you might choose cloudfiles.
319
320
321 --b2-hide-files
322 Causes Duplicity to hide files in B2 instead of deleting them.
323 Useful in combination with B2's lifecycle rules.
324
325
326 --compare-data
327 Enable data comparison of regular files on action verify. This
328 conducts a verify as described above to verify the integrity of
329 the backup archives, but additionally compares restored files to
330 those in target_directory. Duplicity will not replace any files
331 in target_directory. Duplicity will exit with a non-zero error
332 level if the files do not correctly verify or if any files from
333 the archive differ from those in target_directory. On verbosity
334 level 4 or higher, it will log a message for each file that
335 differs from its equivalent in target_directory.
336
337
338 --copy-links
339 Resolve symlinks during backup. Enabling this will resolve &
340 back up the symlink's file/folder data instead of the symlink
341 itself, potentially increasing the size of the backup.
342
343
344 --dry-run
345 Calculate what would be done, but do not perform any backend
346 actions.
347
348
349 --encrypt-key key-id
350 When backing up, encrypt to the given public key, instead of
351 using symmetric (traditional) encryption. Can be specified
352 multiple times. The key-id can be given in any of the formats
353 supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
354 ID" for details.
355
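For example, to encrypt to two recipients' public keys (the key ids are
placeholders):

duplicity --encrypt-key ABCD1234 --encrypt-key 4321DCBA /home/me
sftp://uid@other.host/some_dir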
356
357 --encrypt-secret-keyring filename
358 This option can only be used with --encrypt-key, and changes the
359 path to the secret keyring for the encrypt key to filename. This
360 keyring is not used when creating a backup. If not specified,
361 the default secret keyring is used, which is usually located at
362 .gnupg/secring.gpg
363
364
365 --encrypt-sign-key key-id
366 Convenience parameter. Same as --encrypt-key key-id --sign-key
367 key-id.
368
369
370 --exclude shell_pattern
371 Exclude the file or files matched by shell_pattern. If a
372 directory is matched, then files under that directory will also
373 be matched. See the FILE SELECTION section for more
374 information.
375
376
377 --exclude-device-files
378 Exclude all device files. This can be useful for
379 security/permissions reasons or if duplicity is not handling
380 device files correctly.
381
382
383 --exclude-filelist filename
384 Excludes the files listed in filename, with each line of the
385 filelist interpreted according to the same rules as --include
386 and --exclude. See the FILE SELECTION section for more
387 information.
388
389
390 --exclude-if-present filename
391 Exclude directories if filename is present. Allows the user to
392 specify folders that they do not wish to back up by adding a
393 specified file (e.g. ".nobackup") instead of maintaining a
394 comprehensive exclude/include list.
395
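For example, to skip every directory that contains a file named
".nobackup":

duplicity --exclude-if-present .nobackup /home/me
sftp://uid@other.host/some_dir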
396
397 --exclude-older-than time
398 Exclude any files whose modification date is earlier than the
399 specified time. This can be used to produce a partial backup
400 that contains only recently changed files. See the TIME FORMATS
401 section for more information.
402
403
404 --exclude-other-filesystems
405 Exclude files on file systems (identified by device number)
406 other than the file system the root of the source directory is
407 on.
408
409
410 --exclude-regexp regexp
411 Exclude files matching the given regexp. Unlike the --exclude
412 option, this option does not match files in a directory it
413 matches. See the FILE SELECTION section for more information.
414
415
416 --file-prefix prefix
417 --file-prefix-manifest prefix
418 --file-prefix-archive prefix
419 --file-prefix-signature prefix
420 Adds a prefix to either all files or only manifest, archive,
421 signature files.
422
423 The same set of prefixes must be passed in on backup and
424 restore.
425
426 If both global and type-specific prefixes are set, global prefix
427 will go before type-specific prefixes.
428
429 See also A NOTE ON FILENAME PREFIXES
430
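For example, a hypothetical setup that gives metadata and data files
distinct prefixes (the prefix strings are illustrative):

duplicity --file-prefix-manifest meta_ --file-prefix-signature meta_
--file-prefix-archive data_ /home/me sftp://uid@other.host/some_dir
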
431 --file-to-restore path
432 This option may be given in restore mode, causing only path to
433 be restored instead of the entire contents of the backup
434 archive. path should be given relative to the root of the
435 directory backed up.
436
437 --full-if-older-than time
438 Perform a full backup if an incremental backup is requested, but
439 the latest full backup in the collection is older than the given
440 time. See the TIME FORMATS section for more information.
441
442 --force
443 Proceed even if data loss might result. Duplicity will let the
444 user know when this option is required.
445
446 --ftp-passive
447 Use passive (PASV) data connections. The default is to use
448 passive, but to fall back to regular if the passive connection
449 fails or times out.
450
451 --ftp-regular
452 Use regular (PORT) data connections.
453
454 --gio Use the GIO backend and interpret any URLs as GIO would.
455
456 --hidden-encrypt-key key-id
457 Same as --encrypt-key, but it hides the user's key id from the
458 encrypted file. It uses gpg's --hidden-recipient option to obfuscate
459 the owner of the backup. On restore, gpg will automatically try
460 all available secret keys in order to decrypt the backup. See
461 gpg(1) for more details.
462
463 --ignore-errors
464 Try to ignore certain errors if they happen. This option is only
465 intended to allow the restoration of a backup in the face of
466 certain problems that would otherwise cause the backup to fail.
467 It is not ever recommended to use this option unless you have a
468 situation where you are trying to restore from backup and it is
469 failing because of an issue which you want duplicity to ignore.
470 Even then, depending on the issue, this option may not have an
471 effect.
472
473 Please note that while ignored errors will be logged, there will
474 be no summary at the end of the operation to tell you what was
475 ignored, if anything. If this is used for emergency restoration
476 of data, it is recommended that you run the backup in such a way
477 that you can revisit the backup log (look for lines containing
478 the string IGNORED_ERROR).
479
480 If you ever have to use this option for reasons that are not
481 understood or understood but not your own responsibility, please
482 contact duplicity maintainers. The need to use this option under
483 production circumstances would normally be considered a bug.
484
485 --imap-full-address email_address
486 The full email address of the user name when logging into an
487 imap server. If not supplied just the user name part of the
488 email address is used.
489
490 --imap-mailbox option
491 Allows you to specify a different mailbox. The default is
492 "INBOX". Other languages may require a different mailbox than
493 the default.
494
495 --gpg-binary file_path
496 Allows you to force duplicity to use file_path as the gpg command
497 line binary. Can be an absolute or relative file path or a file
498 name. Default value is 'gpg'. The binary will be located via
499 the PATH environment variable.
500
501 --gpg-options options
502 Allows you to pass options to gpg encryption. The options list
503 should be of the form "--opt1 --opt2=parm" where the string is
504 quoted and the only spaces allowed are between options.
505
506 --include shell_pattern
507 Similar to --exclude but include matched files instead. Unlike
508 --exclude, this option will also match parent directories of
509 matched files (although not necessarily their contents). See
510 the FILE SELECTION section for more information.
511
512 --include-filelist filename
513 Like --exclude-filelist, but include the listed files instead.
514 See the FILE SELECTION section for more information.
515
516 --include-regexp regexp
517 Include files matching the regular expression regexp. Only
518 files explicitly matched by regexp will be included by this
519 option. See the FILE SELECTION section for more information.
520
521 --log-fd number
522 Write specially-formatted versions of output messages to the
523 specified file descriptor. The format used is designed to be
524 easily consumable by other programs.
525
526 --log-file filename
527 Write specially-formatted versions of output messages to the
528 specified file. The format used is designed to be easily
529 consumable by other programs.
530
531 --max-blocksize number
532 Determines the size of the blocks examined for changes during
533 the diff process. For files < 1MB the blocksize is a constant
534 of 512. For files over 1MB the size is given by:
535
536 file_blocksize = int((file_len / (2000 * 512)) * 512)
537 return min(file_blocksize, config.max_blocksize)
538
539 where config.max_blocksize defaults to 2048. If you specify a
540 larger max_blocksize, your difftar files will be larger, but
541 your sigtar files will be smaller. If you specify a smaller
542 max_blocksize, the reverse occurs. The --max-blocksize option
543 should be in multiples of 512.
544
545 --name symbolicname
546 Set the symbolic name of the backup being operated on. The
547 intent is to use a separate name for each logically distinct
548 backup. For example, someone may use "home_daily_s3" for the
549 daily backup of a home directory to Amazon S3. The structure of
550 the name is up to the user, it is only important that the names
551 be distinct. The symbolic name is currently only used to affect
552 the expansion of --archive-dir , but may be used for additional
553 features in the future. Users running more than one distinct
554 backup are encouraged to use this option.
555
556 If not specified, the default value is a hash of the backend
557 URL.
558
559 --no-compression
560 Do not use GZip to compress files on remote system.
561
562 --no-encryption
563 Do not use GnuPG to encrypt files on remote system.
564
565 --no-print-statistics
566 By default duplicity will print statistics about the current
567 session after a successful backup. This switch disables that
568 behavior.
569
570 --no-files-changed
571 By default duplicity will collect file names and change action
572 in memory (add, del, chg) during backup. This can be quite
573 expensive in memory use, especially with millions of small
574 files. This flag turns off that collection. This means that
575 the --file-changed option for collection-status will return
576 nothing.
577
578 --null-separator
579 Use nulls (\0) instead of newlines (\n) as line separators,
580 which may help when dealing with filenames containing newlines.
581 This affects the expected format of the files specified by the
582 --{include|exclude}-filelist switches as well as the format of
583 the directory statistics file.
584
585 --numeric-owner
586 On restore always use the numeric uid/gid from the archive and
587 not the archived user/group names, which is the default
588 behaviour. Recommended for restoring from live CDs which might
589 have users with identical names but different uids/gids.
590
591 --do-not-restore-ownership
592 Ignores the uid/gid from the archive and keeps the current
593 user's ownership. Recommended for restoring data to a mounted
594 filesystem which does not support Unix ownership or when root
595 privileges are not available.
596
597 --num-retries number
598 Number of retries to make on errors before giving up.
599
600 --old-filenames
601 Use the old filename format (incompatible with Windows/Samba)
602 rather than the new filename format.
603
604 --par2-options options
605 Verbatim options to pass to par2.
606
607 --par2-redundancy percent
608 Adjust the level of redundancy in percent for Par2 recovery
609 files (default 10%).
610
611 --par2-volumes number
612 Number of Par2 volumes to create (default 1).
613
614 --progress
615 When selected, duplicity will output the current upload progress
616 and estimated upload time. To annotate changes, it will perform
617 a first dry-run before a full or incremental, and then run the
618 real operation, estimating the real upload progress.
619
620 --progress-rate number
621 Sets the update rate at which duplicity will output the upload
622 progress messages (requires --progress option). Default is to
623 print the status every 3 seconds.
624
625 --rename <original path> <new path>
626 Treats <original path> in the backup as if it were <new path>.
627 Can be passed multiple times. An example:
628
629 duplicity restore --rename Documents/metal Music/metal
630 sftp://uid@other.host/some_dir /home/me
631
632 --rsync-options options
633 Allows you to pass options to the rsync backend. The options
634 list should be of the form "opt1=parm1 opt2=parm2" where the
635 option string is quoted and the only spaces allowed are between
636 options. The option string will be passed verbatim to rsync,
637 after any internally generated option designating the remote
638 port to use. Here is a possibly useful example:
639
640 duplicity --rsync-options="--partial-dir=.rsync-partial"
641 /home/me rsync://uid@other.host/some_dir
642
643 --s3-endpoint-url url
644 Specifies the endpoint URL of the S3 storage.
645
646 NOTE: Due to API restrictions the legacy backend boto will use
647 only the values scheme (protocol) and hostname from the given
648 url. Choosing 'http://' will disable SSL encryption, just as if
649 --s3-unencrypted-connection were set.
650
651 --s3-european-buckets
652 When using the Amazon S3 backend, create buckets in Europe
653 instead of the default (requires --s3-use-new-style ). Also see
654 the EUROPEAN S3 BUCKETS section.
655
656 NOTE: This option does not apply when using the boto3 backend,
657 which does not create buckets.
658
659 See also A NOTE ON AMAZON S3 below.
660
661 --s3-multipart-chunk-size
662 Chunk size (in MB, default is 25MB) used for S3 multipart
663 uploads. Make this smaller than --volsize to maximize the use of
664 your bandwidth. For example, a chunk size of 10MB with a volsize
665 of 30MB will result in 3 chunks per volume upload.
666
667 See also A NOTE ON AMAZON S3 below.
668
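For example, a sketch that splits each 30MB volume into three 10MB
chunks (the bucket path is illustrative; credentials are passed via the
AWS_* environment variables described in A NOTE ON AMAZON S3):

duplicity --volsize 30 --s3-multipart-chunk-size 10 /some/path
s3:///bucket/subfolder
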
669 --s3-multipart-max-procs
670 Specify the maximum number of processes to spawn when performing
671 a multipart upload to S3. By default, this will choose the
672 number of processors detected on your system (e.g. 4 for a
673 4-core system). You can adjust this number as required to ensure
674 you don't overload your system while maximizing the use of your
675 bandwidth.
676
677 NOTE: This has no effect when using boto3 backend.
678
679 See also A NOTE ON AMAZON S3 below.
680
681 --s3-multipart-max-timeout
682 You can control the maximum time (in seconds) a multipart upload
683 can spend on uploading a single chunk to S3. This may be useful
684 if you find your system hanging on multipart uploads or if you'd
685 like to control the time variance when uploading to S3 to ensure
686 you kill connections to slow S3 endpoints.
687
688 NOTE: This has no effect when using boto3 backend.
689
690 See also A NOTE ON AMAZON S3 below.
691
692 --s3-region-name
693 Specifies the region of the S3 storage.
694
695 NOTE: Only in boto3 backend.
696
697 --s3-unencrypted-connection
698 Disable SSL for connections to S3. This may be much faster, at
699 some cost to confidentiality.
700
701 With this option set, anyone between your computer and S3 can
702 observe the traffic and will be able to tell: that you are using
703 Duplicity, the name of the bucket, your AWS Access Key ID, the
704 increment dates and the amount of data in each increment.
705
706 This option affects only the connection, not the GPG encryption
707 of the backup increment files. Unless that is disabled, an
708 observer will not be able to see the file names or contents.
709
710 See also A NOTE ON AMAZON S3 below.
711
712 --s3-use-deep-archive
713 Store volumes using Glacier Deep Archive S3 when uploading to
714 Amazon S3. This storage class has a lower cost of storage but a
715 higher per-request cost along with delays of up to 48 hours from
716 the time of retrieval request. This storage cost is calculated
717 against a 180-day storage minimum. According to Amazon this
718 storage is ideal for data archiving and long-term backup
719 offering 99.999999999% durability. To restore a backup you will
720 have to manually migrate all data stored on AWS Glacier Deep
721 Archive back to Standard S3 and wait for AWS to complete the
722 migration.
723
724 NOTE: Duplicity will store the manifest.gpg files from full and
725 incremental backups on AWS S3 standard storage to allow quick
726 retrieval for later incremental backups, all other data is
727 stored in S3 Glacier Deep Archive.
728
729 --s3-use-glacier
730 Store volumes using Glacier Flexible Storage when uploading to
731 Amazon S3. This storage class has a lower cost of storage but a
732 higher per-request cost along with delays of up to 12 hours from
733 the time of retrieval request. This storage cost is calculated
734 against a 90-day storage minimum. According to Amazon this
735 storage is ideal for data archiving and long-term backup
736 offering 99.999999999% durability. To restore a backup you will
737 have to manually migrate all data stored on AWS Glacier back to
738 Standard S3 and wait for AWS to complete the migration.
739
740 NOTE: Duplicity will store the manifest.gpg files from full and
741 incremental backups on AWS S3 standard storage to allow quick
742 retrieval for later incremental backups, all other data is
743 stored in S3 Glacier.
744
745 --s3-use-glacier-ir
746 Store volumes using Glacier Instant Retrieval when uploading to
747 Amazon S3. This storage class is similar to Glacier Flexible
748 Storage but offers instant retrieval at standard speeds.
749
750 NOTE: Duplicity will store the manifest.gpg files from full and
751 incremental backups on AWS S3 standard storage to allow quick
752 retrieval for later incremental backups, all other data is
753 stored in S3 Glacier.
754
755 --s3-use-ia
756 Store volumes using Standard - Infrequent Access when uploading
757 to Amazon S3. This storage class has a lower storage cost but a
758 higher per-request cost, and the storage cost is calculated
759 against a 30-day storage minimum. According to Amazon, this
760 storage is ideal for long-term file storage, backups, and
761 disaster recovery.
762
763 --s3-use-multiprocessing
764 Allow multipart volume uploads to S3 through multiprocessing.
765 This option requires Python 2.6 and can be used to make uploads
766 to S3 more efficient. If enabled, files duplicity uploads to S3
767 will be split into chunks and uploaded in parallel. Useful if
768 you want to saturate your bandwidth or if large files are
769 failing during upload.
770
771 NOTE: This has no effect when using the boto3 backend. Boto3
772 always attempts to use multiprocessing.
773
774 See also A NOTE ON AMAZON S3 below.
775
776 --s3-use-new-style
777 When operating on Amazon S3 buckets, use new-style subdomain
778 bucket addressing. This is now the preferred method to access
779 Amazon S3, but is not backwards compatible if your bucket name
780 contains upper-case characters or other characters that are not
781 valid in a hostname.
782
783 NOTE: This option has no effect when using the boto3 backend,
784 which will always use new style subdomain bucket naming.
785
786 See also A NOTE ON AMAZON S3 below.
787
788 --s3-use-onezone-ia
789 Store volumes using One Zone - Infrequent Access when uploading
790 to Amazon S3. This storage is similar to Standard - Infrequent
791 Access, but only stores object data in one Availability Zone.
792
793 --s3-use-rrs
794 Store volumes using Reduced Redundancy Storage when uploading to
795 Amazon S3. This will lower the cost of storage but also lower
796 the durability of stored volumes to 99.99% instead of the
797 99.999999999% durability offered by Standard Storage on S3.
798
799 --s3-use-server-side-encryption
800 Allow use of server side encryption in S3
801
802 --s3-use-server-side-kms-encryption
803 --s3-kms-key-id key_id
804 --s3-kms-grant grant
805 Enable server-side encryption using key management service.
806
807 --scp-command command
808 (only ssh pexpect backend with --use-scp enabled) The command
809 will be used instead of "scp" to send or receive files. To list
810 and delete existing files, the sftp command is used.
811 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
812
813 --sftp-command command
814 (only ssh pexpect backend) The command will be used instead of
815 "sftp".
816 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
817
818 --short-filenames
819 If this option is specified, the names of the files duplicity
820 writes will be shorter (about 30 chars) but less understandable.
821 This may be useful when backing up to MacOS or another OS or FS
822 that doesn't support long filenames.
823
824 --sign-key key-id
825 This option can be used when backing up, restoring or verifying.
826 When backing up, all backup files will be signed with the given key-id.
827 When restoring, duplicity will signal an error if any remote
828 file is not signed with the given key-id. The key-id can be
829 given in any of the formats supported by GnuPG; see gpg(1),
830 section "HOW TO SPECIFY A USER ID" for details. Should be
831 specified only once because currently only one signing key is
832 supported. Last entry overrides all other entries.
833 See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
834
835 --ssh-askpass
836 Tells the ssh backend to prompt the user for the remote system
837 password, if it was not defined in target url and no
838 FTP_PASSWORD env var is set. This password is also used for
839 passphrase-protected ssh keys.
840
841 --ssh-options options
842 Allows you to pass options to the ssh backend. Can be specified
843 multiple times or as a space separated options list. The
844 options list should be of the form "-oOpt1='parm1'
845 -oOpt2='parm2'" where the option string is quoted and the only
846 spaces allowed are between options. The option string will be
847 passed verbatim to both scp and sftp, whose command line syntax
848 differs slightly; the options should therefore be given in
849 the long option format described in ssh_config(5).
850
851 example of a list:
852
853 duplicity --ssh-options="-oProtocol=2
854 -oIdentityFile='/my/backup/id'" /home/me
855 scp://user@host/some_dir
856
857 example with multiple parameters:
858
859 duplicity --ssh-options="-oProtocol=2" --ssh-
860 options="-oIdentityFile='/my/backup/id'" /home/me
861 scp://user@host/some_dir
862
863 NOTE: The ssh paramiko backend currently supports only the -i or
864 -oIdentityFile or -oUserKnownHostsFile or -oGlobalKnownHostsFile
865 settings. If needed provide more host specific options via
866 ssh_config file.
867
868 --ssl-cacert-file file
869 (only webdav & lftp backend) Provide a cacert file for ssl
870 certificate verification.
871
872 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
873
874 --ssl-cacert-path path/to/certs/
875 (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
876 recent lftp) Provide a path to a folder containing cacert files
877 for ssl certificate verification.
878
879 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
880
881 --ssl-no-check-certificate
882 (only webdav & lftp backend) Disable ssl certificate
883 verification.
884
885 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
886
887 --swift-storage-policy
888 Use this storage policy when operating on Swift containers.
889
890 See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.
891
892 --metadata-sync-mode mode
893 This option defaults to 'partial', but you can set it to 'full'.
894
895 Use 'partial' to avoid syncing metadata for backup chains that
896 you are not going to use. This saves time when restoring for
897 the first time, and lets you restore an old backup that was
898 encrypted with a different passphrase by supplying only the
899 target passphrase.
900
901 Use 'full' to sync metadata for all backup chains on the remote.
902
903 --tempdir directory
904 Use this existing directory for duplicity temporary files
905 instead of the system default, which is usually the /tmp
906 directory. This option supersedes any environment variable.
907
908 See also ENVIRONMENT VARIABLES.
909
910 -ttime, --time time, --restore-time time
911 Specify the time from which to restore or list files.
912
913 --time-separator char
914 Use char as the time separator in filenames instead of colon
915 (":").
916
917 --timeout seconds
918 Use seconds as the socket timeout value if duplicity begins to
919 timeout during network operations. The default is 30 seconds.
920
921 --use-agent
922 If this option is specified, then --use-agent is passed to the
923 GnuPG encryption process and it will try to connect to gpg-agent
924 before it asks for a passphrase for --encrypt-key or --sign-key
925 if needed.
926
927 NOTE: Contrary to previous versions of duplicity, this option
928 will also be honored by GnuPG 2 and newer versions. If GnuPG 2
929 is in use, duplicity passes the option --pinentry-mode=loopback
930 to the gpg process unless --use-agent is specified on the
931 duplicity command line. This has the effect that GnuPG 2 uses
932 the agent only if --use-agent is given, just like GnuPG 1.
933
934 --verbosity level, -vlevel
935 Specify output verbosity level (log level). Named levels and
936 corresponding values are 0 Error, 2 Warning, 4 Notice (default),
937 8 Info, 9 Debug (noisiest).
938 level may also be
939 a character: e, w, n, i, d
940 a word: error, warning, notice, info, debug
941
942 The options -v4, -vn and -vnotice are functionally equivalent,
943 as are the mixed/upper-case versions -vN, -vNotice and -vNOTICE.
944
945 --version
946 Print duplicity's version and quit.
947
948 --volsize number
949 Change the volume size to number MB. Default is 200MB.
950
951 --webdav-headers csv formatted key,value pairs
952 The input format is a comma separated list of key,value pairs.
953 Standard CSV encoding may be used.
954
955 For example to set a Cookie use 'Cookie,name=value', or
956 '"Cookie","name=value"'.
957
958 You can set multiple headers, e.g.
959 '"Cookie","name=value","Authorization","xxx"'.
960
961 ENVIRONMENT VARIABLES
962 TMPDIR, TEMP, TMP
963 In decreasing order of importance, specifies the directory to
964 use for temporary files (inherited from Python's tempfile
965 module). The --tempdir option, if given, supersedes any of
966 these.
967 FTP_PASSWORD
968 Supported by most backends which are password capable. More
969 secure than setting it in the backend url (which might be
970 readable in the operating systems process listing to other users
971 on the same machine).
972 PASSPHRASE
973 This passphrase is passed to GnuPG. If this is not set, the user
974 will be prompted for the passphrase.
975 SIGN_PASSPHRASE
976 The passphrase to be used for --sign-key. If omitted, and the
977 sign key is also one of the keys to encrypt against, PASSPHRASE
978 will be reused instead. Otherwise, if a passphrase is needed but
979 not set, the user will be prompted for it.
980
981 Other environment variables may be used to configure specific
982 backends. See the notes for the particular backend.
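
For example, a non-interactive run might provide the GnuPG passphrase
via the environment (the value is illustrative):

PASSPHRASE=my_secret_passphrase duplicity /home/me
sftp://uid@other.host/some_dir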
983
984 URL FORMAT
985 Duplicity uses the URL format (as standard as possible) to define data
986 locations. The major difference is that the whole host section is
987 optional for some backends.
988 NOTE: If path starts with an extra '/' it usually denotes an absolute
989 path on the backend.
990
991 The generic format for a URL is:
992
993 scheme://[[user[:password]@]host[:port]/][/]path
994
995 or
996
997 scheme://[/]path
998
999 It is not recommended to expose the password on the command line, since
1000 it could be revealed to anyone with permissions to do process listings;
1001 it is permitted, however. Consider setting the environment variable
1002 FTP_PASSWORD instead, which is used by most, if not all, backends,
1003 regardless of its name.
1004
1005 In protocols that support it, the path may be preceded by a single
1006 slash, '/path', to represent a relative path to the target home
1007 directory, or preceded by a double slash, '//path', to represent an
1008 absolute filesystem path.
1009
1010 NOTE: Scheme (protocol) access may be provided by more than one
1011 backend. In case the default backend is buggy or simply not working in
1012 a specific case it might be worth trying an alternative implementation.
1013 Alternative backends can be selected by prefixing the scheme with the
1014 name of the alternative backend e.g. ncftp+ftp:// and are mentioned
1015 below the scheme's syntax summary.
1016
1017 Formats of each of the URL schemes follow:
1018
1019 Amazon Drive Backend
1020 ad://some_dir
1021
1022 See also A NOTE ON AMAZON DRIVE
1023
1024 Azure
1025 azure://container-name
1026
1027 See also A NOTE ON AZURE ACCESS
1028
1029 B2
1030 b2://account_id[:application_key]@bucket_name/[folder/]
1031
1032 Box
1033 box:///some_dir[?config=path_to_config]
1034
1035 See also A NOTE ON BOX ACCESS
1036
1037 Cloud Files (Rackspace)
1038 cf+http://container_name
1039
1040 See also A NOTE ON CLOUD FILES ACCESS
1041
1042 Dropbox
1043 dpbx:///some_dir
1044
1045 Make sure to read A NOTE ON DROPBOX ACCESS first!
1046
1047 File (local file system)
1048 file://[relative|/absolute]/local/path
1049
1050 FISH (Files transferred over Shell protocol) over ssh
1051 fish://user[:password]@other.host[:port]/[relative|/absolute]_path
1052
1053 FTP
1054 ftp[s]://user[:password]@other.host[:port]/some_dir
1055
1056 NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
1057 default is lftp+ftp://...
1058
1059 Google Cloud Storage (GCS via Interoperable Access)
1060 s3://bucket[/path]
1061
1062 NOTE: use boto+gs://bucket[/path] or boto+s3://bucket[/path] to
1063 use legacy boto backend. default is boto3+s3://
1064
1065 See A NOTE ON GOOGLE CLOUD STORAGE about needed endpoint option
1066 and env vars for authentification.
1067
1068 Google Docs
1069 gdocs://user[:password]@other.host/some_dir
1070
1071 NOTE: use pydrive+, gdata+ prefixes to enforce a specific
1072 backend, default is pydrive+gdocs://...
1073
1074 Google Drive
1075
1076 gdrive://<service account's email
1077 address>@developer.gserviceaccount.com/some_dir
1078
1079 See also A NOTE ON GDRIVE BACKEND below.
1080
1081 HSI
1082 hsi://user[:password]@other.host/some_dir
1083
1084 hubiC
1085 cf+hubic://container_name
1086
1087 See also A NOTE ON HUBIC
1088
1089 IMAP email storage
1090 imap[s]://user[:password]@host.com[/from_address_prefix]
1091
1092 See also A NOTE ON IMAP
1093
1094 MediaFire
1095 mf://user[:password]@mediafire.com/some_dir
1096
1097 See also A NOTE ON MEDIAFIRE BACKEND below.
1098
1099 MEGA.nz cloud storage (only works for accounts created prior to
1100 November 2018, uses "megatools")
1101 mega://user[:password]@mega.nz/some_dir
1102
1103 NOTE: if not given in the URL, relies on password being stored
1104 within $HOME/.megarc (as used by the "megatools" utilities)
1105
1106 MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd"
1107 tools)
1108 megav2://user[:password]@mega.nz/some_dir
1109 megav3://user[:password]@mega.nz/some_dir[?no_logout=1] (For
1110 latest MEGAcmd)
1111
1112 NOTE: although "MEGAcmd" no longer uses a configuration file, for
1113 convenience this backend searches for the user password in the
1114 $HOME/.megav2rc file (same syntax as the old
1115 $HOME/.megarc)
1116 [Login]
1117 Username = MEGA_USERNAME
1118 Password = MEGA_PASSWORD
1119
1120 multi
1121 multi:///path/to/config.json
1122
1123 See also A NOTE ON MULTI BACKEND below.
1124
1125 OneDrive Backend
1126 onedrive://some_dir
1127
1128 Par2 Wrapper Backend
1129 par2+scheme://[user[:password]@]host[:port]/[/]path
1130
1131 See also A NOTE ON PAR2 WRAPPER BACKEND
1132
1133 Public Cloud Archive (OVH)
1134 pca://container_name[/prefix]
1135
1136 See also A NOTE ON PCA ACCESS
1137
1138 pydrive
1139 pydrive://<service account's email
1140 address>@developer.gserviceaccount.com/some_dir
1141
1142 See also A NOTE ON PYDRIVE BACKEND below.
1143
1144 Rclone Backend
1145 rclone://remote:/some_dir
1146
1147 See also A NOTE ON RCLONE BACKEND
1148
1149 Rsync via daemon
1150 rsync://user[:password]@host.com[:port]::[/]module/some_dir
1151
1152 Rsync over ssh (only key auth)
1153 rsync://user@host.com[:port]/[relative|/absolute]_path
1154
1155 S3 storage (Amazon)
1156 s3:///bucket_name[/path]
1157
1158 defaults to the boto3 backend boto3+s3://
1159 alternatively try the legacy boto backend
1160 boto+s3://host[:port]/bucket_name[/path]
1161
1162 For details see A NOTE ON AMAZON S3 below.
1163
1164 SCP/SFTP Secure Copy Protocol/SSH File Transfer Protocol
1165 scp://.. or
1166 sftp://user[:password]@other.host[:port]/[relative|/absolute]_path
1167
1168 defaults are paramiko+scp:// and paramiko+sftp://
1169 alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
1170 See also --ssh-askpass, --ssh-options and A NOTE ON SSH
1171 BACKENDS.
1172
1173 slate
1174 slate://[slate-id]
1175
1176 See also A NOTE ON SLATE BACKEND
1177
1178 Swift (Openstack)
1179 swift://container_name[/prefix]
1180
1181 See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
1182
1183 Tahoe-LAFS
1184 tahoe://alias/directory
1185
1186 WebDAV
1187 webdav[s]://user[:password]@other.host[:port]/some_dir
1188
1189 alternatively try lftp+webdav[s]://
1190
1191 TIME FORMATS
1192 duplicity uses time strings in two places. Firstly, many of the files
1193 duplicity creates will have the time in their filenames in the w3
1194 datetime format as described in a w3 note at http://www.w3.org/TR/NOTE-
1195 datetime. Basically they look like "2001-07-15T04:09:38-07:00", which
1196 means what it looks like. The "-07:00" section means the time zone is
1197 7 hours behind UTC.
1198 Secondly, the -t, --time, and --restore-time options take a time
1199 string, which can be given in any of several formats:
1200 1. the string "now" (refers to the current time)
1201 2. a sequence of digits, like "123456890" (indicating the time in
1202 seconds after the epoch)
1203 3. A string like "2002-01-25T07:00:00+02:00" in datetime format
1204 4. An interval, which is a number followed by one of the characters
1205 s, m, h, D, W, M, or Y (indicating seconds, minutes, hours,
1206 days, weeks, months, or years respectively), or a series of such
1207 pairs. In this case the string refers to the time that preceded
1208 the current time by the length of the interval. For instance,
1209 "1h78m" indicates the time that was one hour and 78 minutes ago.
1210 The calendar here is unsophisticated: a month is always 30 days,
1211 a year is always 365 days, and a day is always 86400 seconds.
1212 5. A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY, or
1213 MM-DD-YYYY, which indicates midnight on the day in question,
1214 relative to the current time zone settings. For instance,
1215 "2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
1216 2002.
1217
1218 FILE SELECTION
1219 When duplicity is run, it searches through the given source directory
1220 and backs up all the files specified by the file selection system. The
1221 file selection system comprises a number of file selection conditions,
1222 which are set using one of the following command line options:
1223 --exclude
1224 --exclude-device-files
1225 --exclude-if-present
1226 --exclude-filelist
1227 --exclude-regexp
1228 --include
1229 --include-filelist
1230 --include-regexp
1231 Each file selection condition either matches or doesn't match a given
1232 file. A given file is excluded by the file selection system exactly
1233 when the first matching file selection condition specifies that the
1234 file be excluded; otherwise the file is included.
1235
1236 For instance,
1237 duplicity --include /usr --exclude /usr /usr
1238 scp://user@host/backup
1239 is exactly the same as
1240 duplicity /usr scp://user@host/backup
1241 because the include and exclude directives match exactly the same
1242 files, and the --include comes first, giving it precedence. Similarly,
1243 duplicity --include /usr/local/bin --exclude /usr/local /usr
1244 scp://user@host/backup
1245 would back up the /usr/local/bin directory (and its contents), but not
1246 /usr/local/doc.
1247
1248 The include, exclude, include-filelist, and exclude-filelist options
1249 accept some extended shell globbing patterns. These patterns can
1250 contain *, **, ?, and [...] (character ranges). As in a normal shell,
1251 * can be expanded to any string of characters not containing "/", ?
1252 expands to any character except "/", and [...] expands to a single
1253 character of those characters specified (ranges are acceptable). The
1254 new special pattern, **, expands to any string of characters whether or
1255 not it contains "/". Furthermore, if the pattern starts with
1256 "ignorecase:" (case insensitive), then this prefix will be removed and
1257 any character in the string can be replaced with an upper- or lowercase
1258 version of itself.
1259
1260 Remember that you may need to quote these characters when typing them
1261 into a shell, so the shell does not interpret the globbing patterns
1262 before duplicity sees them.
1263
1264 The --exclude pattern option matches a file if:
1265 1. pattern can be expanded into the file's filename, or
1266 2. the file is inside a directory matched by the option.
1267 Conversely, the --include pattern matches a file if:
1268 1. pattern can be expanded into the file's filename, or
1269 2. the file is inside a directory matched by the option, or
1270 3. the file is a directory which contains a file matched by the
1271 option.
1272 For example,
1273
1274 --exclude /usr/local
1275
1276 matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
1277 It is the same as --exclude /usr/local --exclude '/usr/local/**'.
1278 On the other hand
1279
1280 --include /usr/local
1281
1282 specifies that /usr, /usr/local, /usr/local/lib, and
1283 /usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus you
1284 don't have to worry about including parent directories to make sure
1285 that included subdirectories have somewhere to go.
1286 Finally,
1287
1288 --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'
1289
1290 would match a file like /usR/5fOO/hello/there/world.py. If it did
1291 match anything, it would also match /usr. If there is no existing file
1292 that the given pattern can be expanded into, the option will not match
1293 /usr alone.
1294
1295 The --include-filelist, and --exclude-filelist, options also introduce
1296 file selection conditions. They direct duplicity to read in a text
1297 file (either ASCII or UTF-8), each line of which is a file
1298 specification, and to include or exclude the matching files. Lines are
1299 separated by newlines or nulls, depending on whether the --null-
1300 separator switch was given. Each line in the filelist will be
1301 interpreted as a globbing pattern the way --include and --exclude
1302 options are interpreted, except that lines starting with "+ " are
1303 interpreted as include directives, even if found in a filelist
1304 referenced by --exclude-filelist. Similarly, lines starting with "- "
1305 exclude files even if they are found within an include filelist.
1306 For example, if file "list.txt" contains the lines:
1307
1308 /usr/local
1309 - /usr/local/doc
1310 /usr/local/bin
1311 + /var
1312 - /var
1313
1314 then --include-filelist list.txt would include /usr, /usr/local, and
1315 /usr/local/bin. It would exclude /usr/local/doc,
1316 /usr/local/doc/python, etc. It would also include /usr/local/man, as
1317 this is included within /usr/local. Finally, it is undefined what
1318 happens with /var. A single file list should not contain conflicting
1319 file specifications.
1320
1321 Each line in the filelist will also be interpreted as a globbing
1322 pattern the way --include and --exclude options are interpreted. For
1323 instance, if the file "list.txt" contains the lines:
1324
1325 dir/foo
1326 + dir/bar
1327 - **
1328
1329 Then --include-filelist list.txt would be exactly the same as
1330 specifying --include dir/foo --include dir/bar --exclude ** on the
1331 command line.
1332
1333 Finally, the --include-regexp and --exclude-regexp options allow files
1334 to be included and excluded if their filenames match a python regular
1335 expression. Regular expression syntax is too complicated to explain
1336 here, but is covered in Python's library reference. Unlike the
1337 --include and --exclude options, the regular expression options don't
1338 match files containing or contained in matched files. So for instance
1339 --include '[0-9]{7}(?!foo)'
1340 matches any files whose full pathnames contain 7 consecutive digits
1341 which aren't followed by 'foo'. However, it wouldn't match /home even
1342 if /home/ben/1234567 existed.
1343
1344 A NOTE ON AMAZON DRIVE
1345 1. The API Keys used for Amazon Drive have not been granted
1346 production limits. Amazon do not say what the development
1347 limits are and are not replying to requests to whitelist
1348 duplicity. A related tool, acd_cli, was demoted to development
1349 limits, but continues to work fine except for cases of excessive
1350 usage. If you experience throttling and similar issues with
1351 Amazon Drive using this backend, please report them to the
1352 mailing list.
1353 2. If you previously used the acd+acdcli backend, it is strongly
1354 recommended to update to the ad backend instead, since it
1355 interfaces directly with Amazon Drive. You will need to setup
1356 the OAuth once again, but can otherwise keep your backups and
1357 config.
1358
1359 A NOTE ON AMAZON S3
1360 When backing up to Amazon S3, two backend implementations are
1361 available: the older boto backend, which is deprecated and no
1362 longer maintained, and the recent boto3 backend based on the newer
1363 boto3 library. The new backend fixes several known limitations in the
1364 older backend, which developed as Amazon S3 evolved.
1365
1366 The boto3 backend should behave largely the same as the older backend,
1367 but there are some differences in the supported "--s3-..." options.
1368 Additionally, there are some compatibility differences.
1369 See the documentation of each option above regarding differences
1370 related to each backend.
1371
1372 The boto3 backend does not support bucket creation. This deliberate
1373 choice simplifies the code, and side steps problems related to region
1374 selection. Additionally, it is probably not a good practice to give
1375 your backup role bucket creation rights. In most cases the role used
1376 for backups should probably be limited to specific buckets.
1377
1378 The boto3 backend only supports newer domain style buckets. Amazon is
1379 moving to deprecate the older bucket style, so migration is
1380 recommended. Use the boto backend for compatibility with buckets using
1381 older naming conventions.
1382
1383 The boto3 backend does not currently support initiating restores from
1384 the glacier storage class. When restoring a backup from glacier or
1385 glacier deep archive, the backup files must first be restored out of
1386 band. There are multiple options when restoring backups from cold
1387 storage, which vary in both cost and speed. See Amazon's documentation
1388 for details.
1389
1390 Both backends use environment variables for authentication:
1391 AWS_ACCESS_KEY_ID (required),
1392 AWS_SECRET_ACCESS_KEY (required)
1393 or
1394 BOTO_CONFIG (required) pointing to a boto config file.
1395 For simplicity's sake we will document the use of the AWS_* vars only.
1396 Consult the boto documentation available on the web if you want to use
1397 the config file.
1398
1399 boto3 backend example backup command line:
1400
1401 AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
1402 duplicity /some/path s3:///bucket/subfolder
1403
1404 you may add --s3-endpoint-url and --s3-region-name and other s3 options
1405 documented above if needed.
1406
1407 legacy boto backend example backup command line:
1408
1409 AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
1410 duplicity /some/path boto+s3://[host:port]/bucket/subfolder
1411
1412 The url host setting is optional and allows defining a custom endpoint
1413 host. You may add --s3-european-buckets and other s3 options documented
1414 above if needed.
1415
1416
1417A NOTE ON AZURE ACCESS
1418 The Azure backend requires the Microsoft Azure Storage Blobs client
1419 library for Python to be installed on the system. See REQUIREMENTS.
1420
1421 It uses the environment variable AZURE_CONNECTION_STRING (required).
1422 This string contains all necessary information such as Storage Account
1423 name and the key for authentication. You can find it under Access Keys
1424 for the storage account.
1425
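       For example, a backup might be invoked like this (a sketch; the
       connection string and container name are placeholders):

              AZURE_CONNECTION_STRING='<connection_string>' \
                duplicity /home/me azure://duplicity-container
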
1426 Duplicity will take care of creating the container when performing the
1427 backup. Do not create it manually beforehand.
1428
1429 A container name (as given in the backup url) must be a valid DNS name,
1430 conforming to the following naming rules:
1431
1432 1. Container names must start with a letter or number, and
1433 can contain only letters, numbers, and the dash (-)
1434 character.
1435 2. Every dash (-) character must be immediately preceded and
1436 followed by a letter or number; consecutive dashes are
1437 not permitted in container names.
1438 3. All letters in a container name must be lowercase.
1439 4. Container names must be from 3 through 63 characters
1440 long.
1441
1442 These rules come from Azure; see https://docs.microsoft.com/en-
1443 us/rest/api/storageservices/naming-and-referencing-
1444 containers--blobs--and-metadata
1445
1446A NOTE ON BOX ACCESS
1447 The box backend requires boxsdk with jwt support to be installed on the
1448 system. See REQUIREMENTS.
1449
1450 It uses the environment variable BOX_CONFIG_PATH (optional). This
1451 string contains the path to the box custom app's config.json. Either
1452 this environment variable or the config query parameter in the url
1453 needs to be specified; if both are given, the query parameter takes precedence.
1454
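       For example, a backup might look like this (a sketch; the config
       path and target folder are placeholders):

              duplicity /home/me 'box:///some_dir?config=/path/to/box_config.json'
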
1455 Create a Box custom app
1456 In order to use the box backend, you need to create a box custom app in
1457 the box developer console (https://app.box.com/developers/console).
1458
1459 After creating a new custom app, please make sure it is configured as
1460 follows:
1461
1462 1. Choose "App Access Only" for "App Access Level"
1463 2. Check "Write all files and folders stored in Box"
1464 3. Generate a Public/Private Keypair
1465
1466 You also need to grant the created custom app permission in the
1467 admin console (https://app.box.com/master/custom-apps) by clicking the
1468 "+" button and entering the client_id, which can be found on the
1469 custom app's configuration page.
1470
1471A NOTE ON CLOUD FILES ACCESS
1472 Pyrax is Rackspace's next-generation Cloud management API, including
1473 Cloud Files access. The cfpyrax backend requires the pyrax library to
1474 be installed on the system. See REQUIREMENTS.
1475
1476 Cloudfiles is Rackspace's now deprecated implementation of OpenStack
1477 Object Storage protocol. Users wishing to use Duplicity with Rackspace
1478 Cloud Files should migrate to the new Pyrax plugin to ensure support.
1479
1480 The backend requires python-cloudfiles to be installed on the system.
1481 See REQUIREMENTS.
1482
1483 It uses three environment variables for authentication:
1484 CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
1485 CLOUDFILES_AUTHURL (optional)
1486
1487 If CLOUDFILES_AUTHURL is unspecified it will default to the value
1488 provided by python-cloudfiles, which points to Rackspace; hence this
1489 value must be set in order to use other Cloud Files providers.
1490
1491A NOTE ON DROPBOX ACCESS
1492        1. First of all, the Dropbox backend requires a valid
1493           authentication token. It should be passed via the
1494           DPBX_ACCESS_TOKEN environment variable.
1495           To obtain it, please create a 'Dropbox API' application at:
1496           https://www.dropbox.com/developers/apps/create
1497           Then visit the app settings and just use the 'Generated access
1498           token' under the OAuth2 section.
1499           Alternatively, you can let duplicity generate the access token
1500           itself. In that case, temporarily export DPBX_APP_KEY and
1501           DPBX_APP_SECRET using values from the app settings page and run
1502           duplicity interactively.
1503           It will print the URL that you need to open in the browser to
1504           obtain an OAuth2 token for the application. Just follow the
1505           on-screen instructions and then put the generated token into the
1506           DPBX_ACCESS_TOKEN variable. Once done, feel free to unset
1507           DPBX_APP_KEY and DPBX_APP_SECRET.
1508
1509        2. "some_dir" must already exist in the Dropbox folder. Depending
1510           on the access token kind it may be:
1511               Full Dropbox: path is absolute and starts from the 'Dropbox'
1512               root folder.
1513               App Folder: path is relative to the application folder.
1514               The Dropbox client will show it in ~/Dropbox/Apps/<app-name>
1515
1516        3. When using Dropbox for storage, be aware that all files,
1517           including the ones in the Apps folder, will be synced to all
1518           connected computers. You may prefer to use a separate Dropbox
1519           account specifically for the backups, and not connect any
1520           computers to that account. Alternatively you can configure
1521           selective sync on all computers to avoid syncing the backup files.
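
       For example, a backup might look like this (a sketch; the token is
       a placeholder and some_dir must already exist as described above):

              DPBX_ACCESS_TOKEN=<access_token> duplicity /home/me dpbx:///some_dir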
1522
1523A NOTE ON EUROPEAN S3 BUCKETS
1524 Amazon S3 provides the ability to choose the location of a bucket upon
1525 its creation. The purpose is to enable the user to choose a location
1526 that is topologically closer on the network, which may allow for
1527 faster data transfers.
1528 duplicity will create a new bucket the first time a bucket access is
1529 attempted. At this point, the bucket will be created in Europe if
1530 --s3-european-buckets was given. For reasons having to do with how the
1531 Amazon S3 service works, this also requires the use of the --s3-use-
1532 new-style option. This option turns on subdomain based bucket
1533 addressing in S3. The details are beyond the scope of this man page,
1534 but it is important to know that your bucket must not contain upper
1535 case letters or any other characters that are not valid parts of a
1536 hostname. Consequently, for reasons of backwards compatibility, use of
1537 subdomain based bucket addressing is not enabled by default.
1538 Note that you will need to use --s3-use-new-style for all operations on
1539 European buckets, not just upon initial creation.
1540 You only need to use --s3-european-buckets upon initial creation, but
1541 you may use it at all times for consistency.
1542 Further note that when creating a new European bucket, it can take a
1543 while before the bucket is fully accessible. At the time of this
1544 writing it is unclear to what extent this is an expected feature of
1545 Amazon S3, but in practice you may experience timeouts, socket errors
1546 or HTTP errors when trying to upload files to your newly created
1547 bucket. Give it a few minutes and the bucket should function normally.
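
       Putting this together, a first backup to a new European bucket
       might look like this (a sketch; bucket name and paths are
       placeholders):

              duplicity --s3-use-new-style --s3-european-buckets \
                /home/me boto+s3://bucket_name/some_dir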
1548
1549A NOTE ON FILENAME PREFIXES
1550 Filename prefixes can be used in multi backend with mirror mode to
1551 define affinity rules. They can also be used in conjunction with S3
1552 lifecycle rules to transition archive files to Glacier, while keeping
1553 metadata (signature and manifest files) on S3.
1554
1555 Duplicity does not require access to archive files except when
1556 restoring from backup.
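
       For example, the following sketch (prefix values are illustrative)
       stores volumes under one prefix and metadata under another, so that
       an S3 lifecycle rule matching archive_ transitions only the volumes
       to Glacier:

              duplicity --file-prefix-archive archive_ \
                --file-prefix-manifest meta_ --file-prefix-signature meta_ \
                /home/me s3:///bucket_name/some_dir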
1557
1558A NOTE ON GOOGLE CLOUD STORAGE (GCS via S3)
1559 Overview
1560 Duplicity access to GCS currently relies on its Interoperability API
1561 (basically S3 for GCS). This needs to be actively enabled before
1562 access is possible. For details read the next section Preparations
1563 below.
1564 Two backends are available to access it: boto3, which is used via
1565 s3:// (an alias for boto3+s3:// ), and the legacy boto backend, usable
1566 via boto+s3://.
1567
1568 Preparations
1569 1. login on https://console.cloud.google.com/
1570 2. go to Cloud Storage->Settings->Interoperability
1571 3. create a Service account (if needed)
1572 4. create Service account HMAC access key and secret (!!instantly
1573 copy!! the secret, it can NOT be recovered later)
1574 5. go to Cloud Storage->Browser
1575 6. create a bucket
1576 7. add permissions for Service account that was used to set up
1577 Interoperability access above
1578
1579 Once set up you can use the generated Interoperable Storage Access key
1580 and secret and pass them to duplicity as described in the next section.
1581
1582 Usage
1583 The following examples show accessing GCS via S3 for a collection-
1584 status action. The env vars, options and url format shown can of
1585 course be applied to all other actions as well.
1586
1587 using boto3 supplying the --s3-endpoint-url manually.
1588
1589 AWS_ACCESS_KEY_ID=<keyid> AWS_SECRET_ACCESS_KEY=<secret>
1590 duplicity collection-status s3://<bucket>/<folder>
1591 --s3-endpoint-url=https://storage.googleapis.com
1592
1593 or alternatively with legacy boto using either boto+gs://.
1594
1595 GS_ACCESS_KEY_ID=<keyid> GS_SECRET_ACCESS_KEY=<secret> duplicity
1596 collection-status boto+gs://<bucket>/<folder>
1597
1598 NOTE: The auth env vars are prefixed GS_ in this case!
1599
1600 or boto+s3:// supplying the --s3-endpoint-url manually.
1601
1602 AWS_ACCESS_KEY_ID=<keyid> AWS_SECRET_ACCESS_KEY=<secret>
1603          duplicity collection-status boto+s3://<bucket>/<folder>
1604 --s3-endpoint-url=https://storage.googleapis.com
1605
1606 Alternatively, you can run gsutil config -a to have the Google Cloud
1607 Storage utility populate the ~/.boto configuration file.
1608
1609 NOTE: Also see section URL FORMAT for a brief overview about the url
1610 format expected.
1611
1612A NOTE ON GDRIVE BACKEND
1613 GDrive: is a rewritten PyDrive: backend with fewer dependencies and a
1614 simpler setup - it uses the JSON keys downloaded directly from Google
1615 Cloud Console.
1616
1617 Note that Google has 2 drive methods, `Shared (previously Team) Drives`
1618 and `My Drive`; both can be shared but require different addressing.
1619
1620 For a Google Shared Drives folder
1621
1622 The Shared Drive ID is specified as a query parameter, driveID, in the
1623 backend URL. Example:
1624 gdrive://developer.gserviceaccount.com/target-
1625 folder/?driveID=<SHARED DRIVE ID>
1626
1627 For a Google My Drive based shared folder
1628
1629 The My Drive folder ID is specified as a query parameter,
1630 myDriveFolderID, in the backend URL. Example:
1631 export GOOGLE_SERVICE_ACCOUNT_URL=<serviceaccount-
1632 name>@<serviceaccount-name>.iam.gserviceaccount.com
1633 gdrive://${GOOGLE_SERVICE_ACCOUNT_URL}/<target-folder-name-in-
1634 myDriveFolder>?myDriveFolderID=<google-myDrive-folder-id>
1635
1636
1637 There are also two ways to authenticate to use GDrive: with a regular
1638 account or with a "service account". With a service account, a separate
1639 account is created that is only accessible with Google APIs and not a
1640 web login. With a regular account, you can store backups in your
1641 normal Google Drive.
1642
1643 To use a service account, go to the Google developers console at
1644 https://console.developers.google.com. Create a project, and make sure
1645 Drive API is enabled for the project. In the "Credentials" section,
1646 click "Create credentials", then select Service Account with JSON key.
1647
1648 The GOOGLE_SERVICE_JSON_FILE environment variable needs to contain the
1649 path to the JSON file on duplicity invocation.
1650
1651 export GOOGLE_SERVICE_JSON_FILE=<path-to-serviceaccount-
1652 credentials.json>
1653
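       A complete backup invocation might then look like this (a sketch;
       the service account address and folder name are placeholders):

              GOOGLE_SERVICE_JSON_FILE=<path-to-serviceaccount-credentials.json> \
                duplicity /home/me gdrive://<serviceaccount-name>@<serviceaccount-name>.iam.gserviceaccount.com/backup-folder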
1654
1655 The alternative is to use a regular account. To do this, start as
1656 above, but when creating a new Client ID, select "Create OAuth client
1657 ID", with application type of "Desktop app". Download the
1658 client_secret.json file for the new client, and set the
1659 GOOGLE_CLIENT_SECRET_JSON_FILE environment variable to the path to this
1660 file, and GOOGLE_CREDENTIALS_FILE to a path to a file where duplicity
1661 will keep the authentication token - this location must be writable.
1662
1663 During the first run, you will be prompted to visit a URL in your
1664 browser to grant access to your drive. Once granted, you will receive a
1665 verification code to paste back into Duplicity. The credentials are
1666 then cached in the file referenced above for future use.
1667
1668 As a sanity check, GDrive checks the host and username from the URL
1669 against the JSON key, and refuses to proceed if the addresses do not
1670 match. Either the email (for the service accounts) or Client ID (for
1671 regular OAuth accounts) must be present in the URL. See URL FORMAT
1672 above.
1673
1674A NOTE ON HUBIC
1675 The hubic backend requires the pyrax library to be installed on the
1676 system. See REQUIREMENTS. You will need to set your credentials for
1677 hubiC in a file called ~/.hubic_credentials, following this pattern:
1678 [hubic]
1679 email = your_email
1680 password = your_password
1681 client_id = api_client_id
1682 client_secret = api_secret_key
1683 redirect_uri = http://localhost/
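
       With the credentials file in place, a backup might look like this
       (a sketch; the container name is a placeholder):

              duplicity /home/me cf+hubic://duplicity-container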
1684
1685A NOTE ON IMAP
1686 An IMAP account can be used as a target for the upload. The userid may
1687 be specified and the password will be requested.
1688 The from_address_prefix may be specified (and probably should be). The
1689 text will be used as the "From" address on the IMAP server. Then on a
1690 restore (or list) command the from_address_prefix will distinguish
1691 between different backups.
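
       For example (a sketch; user, host and prefix are placeholders; the
       password will be requested interactively):

              duplicity /home/me imaps://user@mail.example.com/backup_prefix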
1692
1693A NOTE ON MEDIAFIRE BACKEND
1694 This backend requires the mediafire Python library to be installed on
1695 the system. See REQUIREMENTS.
1696
1697 Use URL escaping for username (and password, if provided via command
1698 line):
1699
1700 mf://duplicity%40example.com@mediafire.com/some_folder
1701 The destination folder will be created for you if it does not exist.
1702
1703A NOTE ON MULTI BACKEND
1704 The multi backend allows duplicity to combine the storage available in
1705 more than one backend store (e.g., you can store across a google drive
1706 account and a onedrive account to get effectively the combined storage
1707 available in both). The URL path specifies a JSON-formatted config file
1708 containing a list of the backends it will use. The URL may also specify
1709 "query" parameters to configure overall behavior. Each element of the
1710 list must have a "url" element, and may also contain an optional
1711 "description" and an optional "env" list of environment variables used
1712 to configure that backend.
1713 Query Parameters
1714 Query parameters come after the file URL in standard HTTP format for
1715 example:
1716 multi:///path/to/config.json?mode=mirror&onfail=abort
1717 multi:///path/to/config.json?mode=stripe&onfail=continue
1718 multi:///path/to/config.json?onfail=abort&mode=stripe
1719 multi:///path/to/config.json?onfail=abort
1720 Order does not matter; however, unrecognized parameters are considered
1721 an error.
1722 mode=stripe
1723 This mode (the default) performs round-robin access to the list
1724 of backends. In this mode, all backends must be reliable as a
1725 loss of one means a loss of one of the archive files.
1726 mode=mirror
1727 This mode accesses backends as a RAID1-store, storing every file
1728 in every backend and reading files from the first-successful
1729 backend. A loss of any backend should result in no failure.
1730 Note that backends added later will only get new files and may
1731 require a manual sync with one of the other operating ones.
1732 onfail=continue
1733        This setting (the default) continues all write operations on a
1734        best-effort basis. Any failure results in the next backend being
1735        tried. Failure is reported only when all backends fail a given
1736        operation, with the error result from the last failure.
1737 onfail=abort
1738 This setting considers any backend write failure as a
1739 terminating condition and reports the error. Data reading and
1740 listing operations are independent of this and will try with the
1741 next backend on failure.
1742 JSON File Example
1743 [
1744 {
1745          "description": "a comment about the backend",
1746 "url": "abackend://myuser@domain.com/backup",
1747 "env": [
1748 {
1749 "name" : "MYENV",
1750 "value" : "xyz"
1751 },
1752 {
1753 "name" : "FOO",
1754 "value" : "bar"
1755 }
1756 ],
1757 "prefixes": ["prefix1_", "prefix2_"]
1758 },
1759 {
1760 "url": "file:///path/to/dir"
1761 }
1762 ]
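
       With a config file like the above, a mirrored backup might be
       started like this (a sketch; the config path is a placeholder):

              duplicity /home/me 'multi:///path/to/config.json?mode=mirror&onfail=abort'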
1763
1764A NOTE ON PAR2 WRAPPER BACKEND
1765 Par2 Wrapper Backend can be used in combination with all other backends
1766 to create recovery files. Just add par2+ before a regular scheme (e.g.
1767 par2+ftp://user@host/dir or par2+s3+http://bucket_name ). This will
1768 create par2 recovery files for each archive and upload them all to the
1769 wrapped backend.
1770 Before restoring, archives will be verified. Corrupt archives will be
1771 repaired on the fly if there are enough recovery blocks available.
1772 Use --par2-redundancy <percent> to adjust the size (and redundancy)
1773 of the recovery files.
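
       For example, with 10 percent redundancy over sftp (a sketch; host
       and paths as in the earlier sftp examples):

              duplicity --par2-redundancy 10 /home/me par2+sftp://uid@other.host/some_dir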
1774
1775A NOTE ON PCA ACCESS
1776 PCA is a long-term data archival solution by OVH. It runs a slightly
1777 modified version of OpenStack Swift that introduces latency in the
1778 data retrieval process. It is a good pick for a multi backend
1779 configuration, where it can receive the volumes while another backend
1780 is used to store manifests and signatures.
1781
1782 The backend requires python-swiftclient to be installed on the system.
1783 python-keystoneclient is also needed to interact with OpenStack's
1784 Keystone Identity service. See REQUIREMENTS.
1785
1786 It uses the following environment variables for authentication:
1787 PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL
1788 (required), PCA_USERID (optional), PCA_TENANTID (optional, but either
1789 the tenant name or tenant id must be supplied), PCA_REGIONNAME
1790 (optional), PCA_TENANTNAME (optional, but either the tenant name or
1791 tenant id must be supplied)
1792
1793 If the user was previously authenticated, the following environment
1794 variables can be used instead: PCA_PREAUTHURL (required),
1795 PCA_PREAUTHTOKEN (required)
1796
1797 If PCA_AUTHVERSION is unspecified, it will default to version 2.
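
       A backup might then be invoked like this (a sketch; all values are
       placeholders):

              PCA_USERNAME=<user> PCA_PASSWORD=<password> \
                PCA_AUTHURL=<auth_url> PCA_TENANTNAME=<tenant> \
                duplicity /home/me pca://container_name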
1798
1799A NOTE ON PYDRIVE BACKEND
1800 The pydrive backend requires the Python PyDrive package to be
1801 installed on the system. See REQUIREMENTS.
1802
1803 There are two ways to use PyDrive: with a regular account or with a
1804 "service account". With a service account, a separate account is
1805 created that is only accessible with Google APIs and not a web login.
1806 With a regular account, you can store backups in your normal Google
1807 Drive.
1808
1809 To use a service account, go to the Google developers console at
1810 https://console.developers.google.com. Create a project, and make sure
1811 Drive API is enabled for the project. Under "APIs and auth", click
1812 Create New Client ID, then select Service Account with P12 key.
1813
1814 Download the .p12 key file of the account and convert it to the .pem
1815 format:
1816 openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem
1817
1818 The content of the .pem file should be passed to the
1819 GOOGLE_DRIVE_ACCOUNT_KEY environment variable for authentication.
1820
1821 The email address of the account will be used as part of the URL. See
1822 URL FORMAT above.
1823
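       A backup might then be invoked like this (a sketch; the key file
       and service account email address are placeholders):

              GOOGLE_DRIVE_ACCOUNT_KEY="$(cat pydriveprivatekey.pem)" \
                duplicity /home/me pydrive://<serviceaccount-email>/some_dir
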
1824 The alternative is to use a regular account. To do this, start as
1825 above, but when creating a new Client ID, select "Installed
1826 application" of type "Other". Create a file with the following content,
1827 and pass its filename in the GOOGLE_DRIVE_SETTINGS environment
1828 variable:
1829 client_config_backend: settings
1830 client_config:
1831 client_id: <Client ID from developers' console>
1832 client_secret: <Client secret from developers' console>
1833 save_credentials: True
1834 save_credentials_backend: file
1835 save_credentials_file: <filename to cache credentials>
1836 get_refresh_token: True
1837
1838 In this scenario, the username and host parts of the URL play no role;
1839 only the path matters. During the first run, you will be prompted to
1840 visit a URL in your browser to grant access to your drive. Once
1841 granted, you will receive a verification code to paste back into
1842 Duplicity. The credentials are then cached in the file referenced
1843 above for future use.
1844
1845A NOTE ON RCLONE BACKEND
1846 Rclone is a powerful command line program to sync files and directories
1847 to and from various cloud storage providers.
1848
1849 Usage
1850 Once you have configured an rclone remote via
1851
1852 rclone config
1853
1854 and successfully set up a remote (e.g. gdrive for Google Drive),
1855 assuming you can list your remote files with
1856
1857 rclone ls gdrive:mydocuments
1858
1859 you can start your backup with
1860
1861 duplicity /mydocuments rclone://gdrive:/mydocuments
1862
1863 Please note the slash after the second colon. Some storage providers
1864 will work with or without a slash after the colon, but others will
1865 not. Since duplicity will complain about a malformed URL if a slash is
1866 not present, always put it after the colon, and the backend will
1867 handle it for you.
1868
1869 Options
1870 Note that all rclone options can be set by env vars as well. This is
1871 properly documented here
1872
1873 https://rclone.org/docs/
1874
1875 but in a nutshell you need to take the long option name, strip the
1876 leading --, change - to _, make it upper case and prepend RCLONE_. For
1877 example,
1878
1879 the equivalent of '--stats 5s' would be the env var
1880 RCLONE_STATS=5s
1881
1882A NOTE ON SLATE BACKEND
1883 Three environment variables are used with the slate backend:
1884 1. `SLATE_API_KEY` - Your slate API key
1885 2. `SLATE_SSL_VERIFY` - either '1'(True) or '0'(False) for ssl
1886 verification (optional - True by default)
1887        3. `PASSPHRASE` - your gpg passphrase for encryption (optional -
1888           you will be prompted if it is not set; it is not used at all
1889           with the `--no-encryption` parameter)
1890
1891 To use the slate backend, use the following scheme:
1892 slate://[slate-id]
1893
1894 e.g. Full backup of current directory to slate:
1895 duplicity full . "slate://6920df43-5c3w-2x7i-69aw-2390567uav75"
1896
1897 Here's a demo:
1898 https://gitlab.com/Shr1ftyy/duplicity/uploads/675664ef0eb431d14c8e20045e3fafb6/slate_demo.mp4
1899
1900A NOTE ON SSH BACKENDS
1901 The ssh backends support sftp and scp/ssh transport protocols. This is
1902 a known user-confusing issue as these are fundamentally different. If
1903 you plan to access your backend via one of those please inform yourself
1904 about the requirements for a server to support sftp or scp/ssh access.
1905 To make it even more confusing the user can choose between several ssh
1906 backends via a scheme prefix: paramiko+ (default), pexpect+, lftp+... .
1907 paramiko & pexpect support --use-scp, --ssh-askpass and --ssh-options.
1908 Only the pexpect backend allows you to define --scp-command and
1909 --sftp-command.
1910 SSH paramiko backend (default) is a complete reimplementation of ssh
1911 protocols natively in python. Advantages are speed and maintainability.
1912 A minor disadvantage is that extra packages are needed as listed in
1913 REQUIREMENTS. In sftp (default) mode all operations are done via the
1914 corresponding sftp commands. In scp mode ( --use-scp ) scp is used for
1915 put/get operations, but listing is done via an ssh remote shell.
1916 SSH pexpect backend is the legacy ssh backend using the command line
1917 ssh binaries via pexpect. Older versions used scp for get and put
1918 operations and sftp for list and delete operations. The current
1919 version uses sftp for all four supported operations, unless the --use-
1920 scp option is used to revert to old behavior.
1921 SSH lftp backend is simply there because lftp can interact with the ssh
1922 cmd line binaries. It is meant as a last resort in case the above
1923 options fail for some reason.
1924
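       For example, to force the pexpect backend instead of the default
       paramiko one (a sketch; host and path as in the earlier sftp
       examples):

              duplicity /home/me pexpect+sftp://uid@other.host/some_dir
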
1925 Why use sftp instead of scp?
1926 The change to sftp was made to allow the remote system to chroot the
1927 backup, thus providing better security, and because sftp does not
1928 suffer from the shell quoting issues that scp does. Scp also does not
1929 support any kind of file listing, so sftp or ssh access will always be
1930 needed in addition for this backend mode to work properly. Sftp does
1931 not have these limitations but needs an sftp service running on the
1932 backend server, which is sometimes not an option.
1933
1934A NOTE ON SSL CERTIFICATE VERIFICATION
1935 Certificate verification, as implemented right now [02.2016], exists
1936 only in the webdav and lftp backends. Older pythons (2.7.8 and below)
1937 and older lftp binaries need a file-based database of certificate
1938 authority certificates (a cacert file).
1939 Newer pythons (2.7.9+) and recent lftp versions, however, support the
1940 system default certificates (usually in /etc/ssl/certs) and also allow
1941 giving an alternative ca cert folder via --ssl-cacert-path.
1942 The cacert file has to be a PEM formatted text file as currently
1943 provided by the CURL project. See
1944 http://curl.haxx.se/docs/caextract.html
1945 After creating/retrieving a valid cacert file you should copy it to
1946 one of
1947        ~/.duplicity/cacert.pem
1948        ~/duplicity_cacert.pem
1949        /etc/duplicity/cacert.pem
1950 Duplicity searches these locations in that order and will fail if it
1951 can't find it. You can however specify the option --ssl-cacert-file
1952 <file> to point duplicity to a copy in a different location.
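
       For example (a sketch; the webdav host and path are placeholders):

              duplicity --ssl-cacert-file /path/to/cacert.pem \
                /home/me webdavs://user@other.host/some_dir
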
1953 Finally there is the --ssl-no-check-certificate option to disable
1954 certificate verification altogether, in case some ssl library is
1955 missing or verification is not wanted. Use it with care; even with
1956 self-signed servers, manually providing the private ca certificate is
1957 definitely the safer option.
1958
1959A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
1960 Swift is the OpenStack Object Storage service.
1961 The backend requires python-swiftclient to be installed on the system.
1962 python-keystoneclient is also needed to use OpenStack's Keystone
1963 Identity service. See REQUIREMENTS.
1964
1965 It uses the following environment variables for authentication:
1966 SWIFT_USERNAME (required), SWIFT_PASSWORD (required), SWIFT_AUTHURL
1967 (required), SWIFT_USERID (required, only for IBM Bluemix
1968 ObjectStorage), SWIFT_TENANTID (required, only for IBM Bluemix
1969 ObjectStorage), SWIFT_REGIONNAME (required, only for IBM Bluemix
1970 ObjectStorage), SWIFT_TENANTNAME (optional, the tenant can be included
1971 in the username)
1972
1973 If the user was previously authenticated, the following environment
1974 variables can be used instead: SWIFT_PREAUTHURL (required),
1975 SWIFT_PREAUTHTOKEN (required)
1976
1977 If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
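
       A backup might then be invoked like this (a sketch; all values are
       placeholders):

              SWIFT_USERNAME=<user> SWIFT_PASSWORD=<password> \
                SWIFT_AUTHURL=<auth_url> \
                duplicity /home/me swift://container_name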
1978
1979A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
1980 Signing and symmetrically encrypting at the same time with the gpg
1981 binary on the command line, as used within duplicity, is a
1982 particularly challenging issue. Tests showed that the following
1983 combinations work.
1984 1. Set up gpg-agent properly. Use the option --use-agent and enter both
1985    passphrases (symmetric and sign key) in the gpg-agent's dialog.
1986 2. Use a PASSPHRASE of your choice for symmetric encryption and a
1987    signing key with an empty passphrase.
1988 3. The PASSPHRASE used for symmetric encryption and the passphrase of
1989    the signing key are identical.
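
       For example, combination 2 might look like this (a sketch; the key
       id is a placeholder and the signing key is assumed to have an empty
       passphrase):

              PASSPHRASE=<symmetric_passphrase> duplicity --sign-key <keyid> \
                /home/me sftp://uid@other.host/some_dir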
1990
1991KNOWN ISSUES / BUGS
1992 Hard links are currently unsupported (they will be treated as non-linked
1993 regular files).
1994
1995 Bad signatures will be treated as empty instead of logging an
1996 appropriate error message.
1997
1998OPERATION AND DATA FORMATS
1999 This section describes duplicity's basic operation and the format of
2000 its data files. It should not be necessary to read this section to use
2001 duplicity.
2002
2003 The files used by duplicity to store backup data are tarfiles in GNU
2004 tar format. They can be produced independently by rdiffdir(1). For
2005 incremental backups, new files are saved normally in the tarfile. But
2006 when a file changes, instead of storing a complete copy of the file,
2007 only a diff is stored, as generated by rdiff(1). If a file is deleted,
2008 a 0 length file is stored in the tar. It is possible to restore a
2009 duplicity archive "manually" by using tar and then cp, rdiff, and rm as
2010 necessary. These duplicity archives have the extension difftar.
2011
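       A manual restore of a single volume might look like this (a sketch;
       file names are illustrative, and the rdiff step is only needed for
       files stored as diffs):

              gpg -d duplicity-full.<time>.vol1.difftar.gpg > vol1.difftar
              tar -xf vol1.difftar
              rdiff patch <old_version_of_file> <extracted_diff> <restored_file>
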
2012 Both full and incremental backup sets have the same format. In effect,
2013 a full backup set is an incremental one generated from an empty
2014 signature (see below). The files in full backup sets will start with
2015 duplicity-full while the incremental sets start with duplicity-inc.
2016 When restoring, duplicity applies patches in order, so deleting, for
2017 instance, a full backup set may make related incremental backup sets
2018 unusable.
2019
2020 In order to determine which files have been deleted, and to calculate
2021 diffs for changed files, duplicity needs to process information about
2022 previous sessions. It stores this information in the form of tarfiles
2023 where each entry's data contains the signature (as produced by rdiff)
2024 of the file instead of the file's contents. These signature sets have
2025 the extension sigtar.
2026
2027 Signature files are not required to restore a backup set, but without
2028 an up-to-date signature, duplicity cannot append an incremental backup
2029 to an existing archive.
2030
2031 To save bandwidth, duplicity generates full signature sets and
2032 incremental signature sets. A full signature set is generated for each
2033 full backup, and an incremental one for each incremental backup. These
2034 start with duplicity-full-signatures and duplicity-new-signatures
2035 respectively. These signatures will be stored both locally and
2036 remotely. The remote signatures will be encrypted if encryption is
2037 enabled. The local signatures are stored unencrypted in the
2038 archive dir (see --archive-dir ).
2039
2040REQUIREMENTS
2041 Duplicity requires a POSIX-like operating system with a python
2042 interpreter version 2.6+ installed. It is best used under GNU/Linux.
2043
2044 Some backends also require additional components (probably available as
2045 packages for your specific platform):
2046 Amazon Drive backend
2047 python-requests - http://python-requests.org
2048 python-requests-oauthlib - https://github.com/requests/requests-
2049 oauthlib
2050 azure backend (Azure Storage Blob Service)
2051 Microsoft Azure Storage Blobs client library for Python -
2052 https://pypi.org/project/azure-storage-blob/
2053 boto backend (S3 Amazon Web Services, Google Cloud Storage) (legacy)
2054 boto version 2.49 (2018/07/11) - http://github.com/boto/boto
2055 boto3 backend (S3 Amazon Web Services, Google Cloud Storage) (default)
2056 boto3 version 1.x - https://github.com/boto/boto3
2057 box backend (box.com)
2058 boxsdk - https://github.com/box/box-python-sdk
2059 cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
2060 Rackspace CloudFiles Pyrax API -
2061 http://docs.rackspace.com/sdks/guide/content/python.html
2062 dpbx backend (Dropbox)
2063 Dropbox Python SDK -
2064 https://www.dropbox.com/developers/reference/sdk
2065 gdocs gdata backend (legacy)
2066 Google Data APIs Python Client Library -
2067 http://code.google.com/p/gdata-python-client/
2068        gdocs pydrive backend (default)
2069 see pydrive backend
2070 gio backend (Gnome VFS API)
2071 PyGObject - http://live.gnome.org/PyGObject
2072 D-Bus (dbus)- http://www.freedesktop.org/wiki/Software/dbus
2073 lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
2074 sftp, webdav[s])
2075 LFTP Client - http://lftp.yar.ru/
2076 MEGA backend (only works for accounts created prior to November 2018)
2077 (mega.nz)
2078 megatools client - https://github.com/megous/megatools
2079 MEGA v2 and v3 backend (works for all MEGA accounts) (mega.nz)
2080 MEGAcmd client - https://mega.nz/cmd
2081 multi backend
2082 Multi -- store to more than one backend
2083 (also see A NOTE ON MULTI BACKEND ) below.
2084 ncftp backend (ftp, select via ncftp+ftp://)
2085 NcFTP - http://www.ncftp.com/
2086 OneDrive backend (Microsoft OneDrive)
2087 python-requests-oauthlib - https://github.com/requests/requests-
2088 oauthlib
2089 Par2 Wrapper Backend
2090 par2cmdline - http://parchive.sourceforge.net/
2091 pydrive backend
2092 PyDrive -- a wrapper library of google-api-python-client -
2093 https://pypi.python.org/pypi/PyDrive
2094 (also see A NOTE ON PYDRIVE BACKEND ) below.
2095 rclone backend
2096 rclone - https://rclone.org/
2097 rsync backend
2098 rsync client binary - http://rsync.samba.org/
2099 ssh paramiko backend (default)
2100 paramiko (SSH2 for python) -
2101 http://pypi.python.org/pypi/paramiko (downloads);
2102 http://github.com/paramiko/paramiko (project page)
2103 pycrypto (Python Cryptography Toolkit) -
2104 http://www.dlitz.net/software/pycrypto/
2105        ssh pexpect backend (legacy)
2106 sftp/scp client binaries OpenSSH - http://www.openssh.com/
2107 Python pexpect module -
2108 http://pexpect.sourceforge.net/pexpect.html
2109 swift backend (OpenStack Object Storage)
2110 Python swiftclient module - https://github.com/openstack/python-
2111 swiftclient/
2112 Python keystoneclient module -
2113 https://github.com/openstack/python-keystoneclient/
2114 webdav backend
2115 certificate authority database file for ssl certificate
2116 verification of HTTPS connections -
2117 http://curl.haxx.se/docs/caextract.html
2118 (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
2119 Python kerberos module for kerberos authentication -
2120 https://github.com/02strich/pykerberos
2121 MediaFire backend
2122 MediaFire Python Open SDK -
2123 https://pypi.python.org/pypi/mediafire/
2124
2125AUTHOR
2126 Original Author - Ben Escoto <bescoto@stanford.edu>
2127 Current Maintainer - Kenneth Loafman <kenneth@loafman.com>
2128 Continuous Contributors
2129 Edgar Soldin, Mike Terry
2130 Most backends were contributed individually. Information about their
2131 authorship may be found in the corresponding file's header.
2132 Also we'd like to thank everybody posting issues to the mailing list or
2133 on launchpad, sending in patches or contributing otherwise. Duplicity
2134 wouldn't be as stable and useful if it weren't for you.
2135 A special thanks goes to rsync.net, a Cloud Storage provider with
2136 explicit support for duplicity, for several monetary donations and for
2137 providing a special "duplicity friends" rate for their offsite backup
2138 service. Email info@rsync.net for details.
2139
2140SEE ALSO
2141 rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).
2142
2143
2144
2145Version 0.8.23 May 15, 2022 DUPLICITY(1)