DUPLICITY(1)                     User Manuals                    DUPLICITY(1)
2
3
NAME
6 duplicity - Encrypted incremental backup to local or remote storage.
7
SYNOPSIS
10 For detailed descriptions for each command see chapter ACTIONS.
11
12 duplicity [full|incremental] [options] source_directory target_url
13
14 duplicity verify [options] [--compare-data] [--file-to-restore
15 <relpath>] [--time time] source_url target_directory
16
17 duplicity collection-status [options] target_url
18
19 duplicity list-current-files [options] [--time time] target_url
20
21 duplicity [restore] [options] [--file-to-restore <relpath>] [--time
22 time] source_url target_directory
23
24 duplicity remove-older-than <time> [options] [--force] target_url
25
26 duplicity remove-all-but-n-full <count> [options] [--force] target_url
27
28 duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
29 target_url
30
31 duplicity cleanup [options] [--force] [--extra-clean] target_url
32
DESCRIPTION
35 Duplicity incrementally backs up files and folders into tar-format
36 volumes encrypted with GnuPG and places them to a remote (or local)
37 storage backend. See chapter URL FORMAT for a list of all supported
38 backends and how to address them. Because duplicity uses librsync,
39 incremental backups are space efficient and only record the parts of
40 files that have changed since the last backup. Currently duplicity
41 supports deleted files, full Unix permissions, uid/gid, directories,
42 symbolic links, fifos, etc., but not hard links.
43
44 If you are backing up the root directory /, remember to --exclude
45 /proc, or else duplicity will probably crash on the weird stuff in
46 there.
47
EXAMPLES
50 Here is an example of a backup, using sftp to back up /home/me to
51 some_dir on the other.host machine:
52
53 duplicity /home/me sftp://uid@other.host/some_dir
54
55 If the above is run repeatedly, the first will be a full backup, and
56 subsequent ones will be incremental. To force a full backup, use the
57 full action:
58
59 duplicity full /home/me sftp://uid@other.host/some_dir
60
       or enforce a full backup every now and then via --full-if-older-than
       <time>, e.g. a full every month:
63
64 duplicity --full-if-older-than 1M /home/me
65 sftp://uid@other.host/some_dir
66
67 Now suppose we accidentally delete /home/me and want to restore it the
68 way it was at the time of last backup:
69
70 duplicity sftp://uid@other.host/some_dir /home/me
71
72 Duplicity enters restore mode because the URL comes before the local
73 directory. If we wanted to restore just the file "Mail/article" in
74 /home/me as it was three days ago into /home/me/restored_file:
75
76 duplicity -t 3D --file-to-restore Mail/article
77 sftp://uid@other.host/some_dir /home/me/restored_file
78
79 The following command compares the latest backup with the current
80 files:
81
82 duplicity verify sftp://uid@other.host/some_dir /home/me
83
84 Finally, duplicity recognizes several include/exclude options. For
       instance, the following will back up the root directory, but exclude
86 /mnt, /tmp, and /proc:
87
88 duplicity --exclude /mnt --exclude /tmp --exclude /proc /
89 file:///usr/local/backup
90
91 Note that in this case the destination is the local directory
       /usr/local/backup. The following will back up only the /home and /etc
93 directories under root:
94
95 duplicity --include /home --include /etc --exclude '**' /
96 file:///usr/local/backup
97
98 Duplicity can also access a repository via ftp. If a user name is
99 given, the environment variable FTP_PASSWORD is read to determine the
100 password:
101
102 FTP_PASSWORD=mypassword duplicity /local/dir
103 ftp://user@other.host/some_dir
104
ACTIONS
       Duplicity knows action commands, which can be fine-tuned with options.
       The action names for backup (full, incr) and restoration (restore)
       can be left out, because duplicity detects which mode to switch to
       from the order of the target URL and the local folder: if the target
       URL comes before the local folder, a restore is performed; if the
       local folder comes before the target URL, that folder is backed up to
       the target URL.
       If a backup is requested and old signatures can be found, duplicity
       automatically performs an incremental backup.
116
       Note: The following explanations cover some, but not all, of the
       options that can be used with each action command. Consult the
       OPTIONS section for more detailed information.
120
121
122 full <folder> <url>
123 Perform a full backup. A new backup chain is started even if
124 signatures are available for an incremental backup.
125
126
127 incr <folder> <url>
128 If this is requested an incremental backup will be performed.
129 Duplicity will abort if no old signatures can be found.
130
131
132 verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
133 <url> <local_path>
134 Restore backup contents temporarily file by file and compare
135 against the local path's contents. duplicity will exit with a
136 non-zero error level if any files are different. On verbosity
137 level info (4) or higher, a message for each file that has
138 changed will be logged.
              The --file-to-restore option restricts verify to that file or
              folder. The --time option allows you to select a backup to
              verify against. The --compare-data option enables data
              comparison (see below).
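              For example, to verify the latest backup of /home/me against
              the sftp location used in the EXAMPLES section, including file
              data (host and paths are illustrative):

                     duplicity verify --compare-data sftp://uid@other.host/some_dir /home/me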
143
144
145 collection-status <url>
146 Summarize the status of the backup repository by printing the
147 chains and sets found, and the number of volumes in each.
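              For example (reusing the illustrative sftp location from the
              EXAMPLES section):

                     duplicity collection-status sftp://uid@other.host/some_dir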
148
149
150 list-current-files [--time <time>] <url>
151 Lists the files contained in the most current backup or backup
152 at time. The information will be extracted from the signature
153 files, not the archive data itself. Thus the whole archive does
154 not have to be downloaded, but on the other hand if the archive
155 has been deleted or corrupted, this command will not detect it.
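              For example, to list the files as they were three days ago
              (location illustrative):

                     duplicity list-current-files --time 3D sftp://uid@other.host/some_dir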
156
157
158 restore [--file-to-restore <relpath>] [--time <time>] <url>
159 <target_folder>
160 You can restore the full monty or selected folders/files from a
161 specific time. Use the relative path as it is printed by list-
162 current-files. Usually not needed as duplicity enters restore
163 mode when it detects that the URL comes before the local folder.
164
165
166 remove-older-than <time> [--force] <url>
167 Delete all backup sets older than the given time. Old backup
168 sets will not be deleted if backup sets newer than time depend
169 on them. See the TIME FORMATS section for more information.
170 Note, this action cannot be combined with backup or other
171 actions, such as cleanup. Note also that --force will be needed
172 to delete the files instead of just listing them.
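              For example, to delete backup sets older than six months,
              assuming the illustrative sftp location from the EXAMPLES
              section:

                     duplicity remove-older-than 6M --force sftp://uid@other.host/some_dir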
173
174
175 remove-all-but-n-full <count> [--force] <url>
              Delete all backup sets that are older than the count:th last
177 full backup (in other words, keep the last count full backups
178 and associated incremental sets). count must be larger than
179 zero. A value of 1 means that only the single most recent backup
180 chain will be kept. Note that --force will be needed to delete
181 the files instead of just listing them.
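              For example, to keep only the three most recent full backups
              and their increments (location illustrative):

                     duplicity remove-all-but-n-full 3 --force sftp://uid@other.host/some_dir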
182
183
184 remove-all-inc-of-but-n-full <count> [--force] <url>
              Delete incremental sets of all backup sets that are older than
186 the count:th last full backup (in other words, keep only old
187 full backups and not their increments). count must be larger
188 than zero. A value of 1 means that only the single most recent
189 backup chain will be kept intact. Note that --force will be
190 needed to delete the files instead of just listing them.
191
192
193 cleanup [--force] [--extra-clean] <url>
194 Delete the extraneous duplicity files on the given backend.
195 Non-duplicity files, or files in complete data sets will not be
196 deleted. This should only be necessary after a duplicity
197 session fails or is aborted prematurely. Note that --force will
198 be needed to delete the files instead of just listing them.
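              For example, after an aborted session (location illustrative):

                     duplicity cleanup --force sftp://uid@other.host/some_dir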
199
OPTIONS
202 --allow-source-mismatch
203 Do not abort on attempts to use the same archive dir or remote
204 backend to back up different directories. duplicity will tell
205 you if you need this switch.
206
207
208 --archive-dir path
209 The archive directory. NOTE: This option changed in 0.6.0. The
210 archive directory is now necessary in order to manage
211 persistence for current and future enhancements. As such, this
212 option is now used only to change the location of the archive
213 directory. The archive directory should not be deleted, or
214 duplicity will have to recreate it from the remote repository
215 (which may require decrypting the backup contents).
216
217 When backing up or restoring, this option specifies that the
218 local archive directory is to be created in path. If the
219 archive directory is not specified, the default will be to
220 create the archive directory in ~/.cache/duplicity/.
221
222 The archive directory can be shared between backups to multiple
223 targets, because a subdirectory of the archive dir is used for
224 individual backups (see --name ).
225
226 The combination of archive directory and backup name must be
227 unique in order to separate the data of different backups.
228
229 The interaction between the --archive-dir and the --name options
230 allows for four possible combinations for the location of the
231 archive dir:
232
233
234 1. neither specified (default)
235 ~/.cache/duplicity/hash-of-url
236
237 2. --archive-dir=/arch, no --name
238 /arch/hash-of-url
239
240 3. no --archive-dir, --name=foo
241 ~/.cache/duplicity/foo
242
243 4. --archive-dir=/arch, --name=foo
244 /arch/foo
245
246
247 --asynchronous-upload
248 (EXPERIMENTAL) Perform file uploads asynchronously in the
249 background, with respect to volume creation. This means that
250 duplicity can upload a volume while, at the same time, preparing
251 the next volume for upload. The intended end-result is a faster
252 backup, because the local CPU and your bandwidth can be more
253 consistently utilized. Use of this option implies additional
254 need for disk space in the temporary storage location; rather
255 than needing to store only one volume at a time, enough storage
256 space is required to store two volumes.
257
258
259 --backend-retry-delay number
260 Specifies the number of seconds that duplicity waits after an
              error has occurred before attempting to repeat the operation.
262
263
264
265 --cf-backend backend
266 Allows the explicit selection of a cloudfiles backend. Defaults
267 to pyrax. Alternatively you might choose cloudfiles.
268
269
270 --compare-data
271 Enable data comparison of regular files on action verify. This
272 is disabled by default for performance reasons.
273
274
275 --copy-links
276 Resolve symlinks during backup. Enabling this will resolve &
277 back up the symlink's file/folder data instead of the symlink
278 itself, potentially increasing the size of the backup.
279
280
281 --dry-run
282 Calculate what would be done, but do not perform any backend
283 actions
284
285
286 --encrypt-key key-id
287 When backing up, encrypt to the given public key, instead of
288 using symmetric (traditional) encryption. Can be specified
289 multiple times. The key-id can be given in any of the formats
290 supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
291 ID" for details.
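              For example, assuming a hypothetical key id ABCD1234 (location
              illustrative):

                     duplicity --encrypt-key ABCD1234 /home/me sftp://uid@other.host/some_dir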
292
293
294
295 --encrypt-secret-keyring filename
              This option can only be used with --encrypt-key, and changes
              the path to the secret keyring for the encrypt key to filename.
              This keyring is not used when creating a backup. If not
              specified, the default secret keyring is used, which is usually
              located at ~/.gnupg/secring.gpg
301
302
303 --encrypt-sign-key key-id
304 Convenience parameter. Same as --encrypt-key key-id --sign-key
305 key-id.
306
307
308 --exclude shell_pattern
309 Exclude the file or files matched by shell_pattern. If a
310 directory is matched, then files under that directory will also
311 be matched. See the FILE SELECTION section for more
312 information.
313
314
315 --exclude-device-files
316 Exclude all device files. This can be useful for
317 security/permissions reasons or if rdiff-backup is not handling
318 device files correctly.
319
320
321 --exclude-filelist filename
322 Excludes the files listed in filename, with each line of the
323 filelist interpreted according to the same rules as --include
324 and --exclude. See the FILE SELECTION section for more
325 information.
326
327
328 --exclude-if-present filename
329 Exclude directories if filename is present. Allows the user to
              specify folders that they do not wish to back up by adding a
331 specified file (e.g. ".nobackup") instead of maintaining a
332 comprehensive exclude/include list. This option needs to come
333 before any other include or exclude options.
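              For example, to skip every folder that contains a marker file
              named ".nobackup" (marker name and locations illustrative):

                     duplicity --exclude-if-present .nobackup /home/me sftp://uid@other.host/some_dir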
334
335
336 --exclude-older-than time
337 Exclude any files whose modification date is earlier than the
338 specified time. This can be used to produce a partial backup
339 that contains only recently changed files. See the TIME FORMATS
340 section for more information.
341
342
343 --exclude-other-filesystems
344 Exclude files on file systems (identified by device number)
345 other than the file system the root of the source directory is
346 on.
347
348
349 --exclude-regexp regexp
350 Exclude files matching the given regexp. Unlike the --exclude
351 option, this option does not match files in a directory it
352 matches. See the FILE SELECTION section for more information.
353
354
355 --extra-clean
356 When cleaning up, be more aggressive about saving space. For
357 example, this may delete signature files for old backup chains.
358
359 Caution: Without signature files those old backup chains are
360 unrestorable. Do not use --extra-clean unless you know what
361 you're doing.
362
363 See the cleanup argument for more information.
364
365
366 --file-prefix, --file-prefix-manifest, --file-prefix-archive, --file-
367 prefix-signature
368 Adds a prefix to all files, manifest files, archive files,
369 and/or signature files.
370
371 The same set of prefixes must be passed in on backup and
372 restore.
373
374 If both global and type-specific prefixes are set, global prefix
375 will go before type-specific prefixes.
376
377 See also A NOTE ON FILENAME PREFIXES
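              For example, to give archive volumes a different prefix than
              the metadata files, e.g. for use with S3 lifecycle rules (the
              prefixes and the location are illustrative):

                     duplicity --file-prefix-archive archive_ \
                            --file-prefix-manifest manifest_ \
                            --file-prefix-signature signature_ \
                            /home/me s3+http://bucket_name/prefix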
378
379
380 --file-to-restore path
381 This option may be given in restore mode, causing only path to
382 be restored instead of the entire contents of the backup
383 archive. path should be given relative to the root of the
384 directory backed up.
385
386
387 --full-if-older-than time
388 Perform a full backup if an incremental backup is requested, but
389 the latest full backup in the collection is older than the given
390 time. See the TIME FORMATS section for more information.
391
392
393 --force
394 Proceed even if data loss might result. Duplicity will let the
395 user know when this option is required.
396
397
398 --ftp-passive
399 Use passive (PASV) data connections. The default is to use
              passive, but to fall back to regular if the passive connection
401 fails or times out.
402
403
404 --ftp-regular
405 Use regular (PORT) data connections.
406
407
408 --gio Use the GIO backend and interpret any URLs as GIO would.
409
410
       --hidden-encrypt-key key-id
              Same as --encrypt-key, but it hides the user's key id from the
              encrypted file. It uses gpg's --hidden-recipient command to
              obfuscate the owner of the backup. On restore, gpg will
              automatically try all available secret keys in order to
              decrypt the backup. See gpg(1) for more details.
417
418
419
420 --ignore-errors
421 Try to ignore certain errors if they happen. This option is only
422 intended to allow the restoration of a backup in the face of
423 certain problems that would otherwise cause the backup to fail.
424 It is not ever recommended to use this option unless you have a
425 situation where you are trying to restore from backup and it is
426 failing because of an issue which you want duplicity to ignore.
427 Even then, depending on the issue, this option may not have an
428 effect.
429
430 Please note that while ignored errors will be logged, there will
431 be no summary at the end of the operation to tell you what was
432 ignored, if anything. If this is used for emergency restoration
433 of data, it is recommended that you run the backup in such a way
434 that you can revisit the backup log (look for lines containing
435 the string IGNORED_ERROR).
436
437 If you ever have to use this option for reasons that are not
438 understood or understood but not your own responsibility, please
439 contact duplicity maintainers. The need to use this option under
440 production circumstances would normally be considered a bug.
441
442
443 --imap-full-address email_address
444 The full email address of the user name when logging into an
445 imap server. If not supplied just the user name part of the
446 email address is used.
447
448
449 --imap-mailbox option
450 Allows you to specify a different mailbox. The default is
451 "INBOX". Other languages may require a different mailbox than
452 the default.
453
454
455 --gpg-binary file_path
              Allows you to force duplicity to use file_path as the gpg
              command line binary. Can be an absolute or relative file path
              or a file name. Default value is 'gpg'. The binary will be
              located via the PATH environment variable.
460
461
462 --gpg-options options
463 Allows you to pass options to gpg encryption. The options list
464 should be of the form "--opt1 --opt2=parm" where the string is
465 quoted and the only spaces allowed are between options.
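              For example, to select specific cipher and digest algorithms
              (the gpg options shown are just an illustration; the location
              is also illustrative):

                     duplicity --gpg-options="--cipher-algo=AES256 --digest-algo=SHA512" /home/me sftp://uid@other.host/some_dir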
466
467
468 --include shell_pattern
469 Similar to --exclude but include matched files instead. Unlike
470 --exclude, this option will also match parent directories of
471 matched files (although not necessarily their contents). See
472 the FILE SELECTION section for more information.
473
474
475 --include-filelist filename
476 Like --exclude-filelist, but include the listed files instead.
477 See the FILE SELECTION section for more information.
478
479
480 --include-regexp regexp
481 Include files matching the regular expression regexp. Only
482 files explicitly matched by regexp will be included by this
483 option. See the FILE SELECTION section for more information.
484
485
486 --log-fd number
487 Write specially-formatted versions of output messages to the
488 specified file descriptor. The format used is designed to be
489 easily consumable by other programs.
490
491
492 --log-file filename
493 Write specially-formatted versions of output messages to the
494 specified file. The format used is designed to be easily
495 consumable by other programs.
496
497
498 --max-blocksize number
499 determines the number of the blocks examined for changes during
500 the diff process. For files < 1MB the blocksize is a constant
501 of 512. For files over 1MB the size is given by:
502
503 file_blocksize = int((file_len / (2000 * 512)) * 512)
504 return min(file_blocksize, globals.max_blocksize)
505
506 where globals.max_blocksize defaults to 2048. If you specify a
507 larger max_blocksize, your difftar files will be larger, but
508 your sigtar files will be smaller. If you specify a smaller
509 max_blocksize, the reverse occurs. The --max-blocksize option
510 should be in multiples of 512.
511
512
513 --name symbolicname
514 Set the symbolic name of the backup being operated on. The
515 intent is to use a separate name for each logically distinct
516 backup. For example, someone may use "home_daily_s3" for the
517 daily backup of a home directory to Amazon S3. The structure of
518 the name is up to the user, it is only important that the names
519 be distinct. The symbolic name is currently only used to affect
520 the expansion of --archive-dir , but may be used for additional
521 features in the future. Users running more than one distinct
522 backup are encouraged to use this option.
523
524 If not specified, the default value is a hash of the backend
525 URL.
526
527
528 --no-compression
529 Do not use GZip to compress files on remote system.
530
531
532 --no-encryption
533 Do not use GnuPG to encrypt files on remote system.
534
535
536 --no-print-statistics
537 By default duplicity will print statistics about the current
538 session after a successful backup. This switch disables that
539 behavior.
540
541
542 --null-separator
543 Use nulls (\0) instead of newlines (\n) as line separators,
544 which may help when dealing with filenames containing newlines.
545 This affects the expected format of the files specified by the
546 --{include|exclude}-filelist switches as well as the format of
547 the directory statistics file.
548
549
550 --numeric-owner
551 On restore always use the numeric uid/gid from the archive and
552 not the archived user/group names, which is the default
              behaviour. Recommended for restoring from live CDs which might
              have users with identical names but different uids/gids.
555
556
557 --num-retries number
558 Number of retries to make on errors before giving up.
559
560
561 --old-filenames
562 Use the old filename format (incompatible with Windows/Samba)
563 rather than the new filename format.
564
565
566 --par2-options options
567 Verbatim options to pass to par2.
568
569
570 --par2-redundancy percent
571 Adjust the level of redundancy in percent for Par2 recovery
572 files (default 10%).
573
574
575 --progress
576 When selected, duplicity will output the current upload progress
              and estimated upload time. To gather the needed information,
              it will perform a first dry-run before a full or incremental
              run, and then run the real operation, estimating the real
              upload progress.
580
581
582 --progress-rate number
583 Sets the update rate at which duplicity will output the upload
584 progress messages (requires --progress option). Default is to
              print the status every 3 seconds.
586
587
588 --rename <original path> <new path>
589 Treats the path orig in the backup as if it were the path new.
590 Can be passed multiple times. An example:
591
592 duplicity restore --rename Documents/metal Music/metal
593 sftp://uid@other.host/some_dir /home/me
594
595
596 --rsync-options options
597 Allows you to pass options to the rsync backend. The options
598 list should be of the form "opt1=parm1 opt2=parm2" where the
599 option string is quoted and the only spaces allowed are between
600 options. The option string will be passed verbatim to rsync,
601 after any internally generated option designating the remote
602 port to use. Here is a possibly useful example:
603
604 duplicity --rsync-options="--partial-dir=.rsync-partial"
605 /home/me rsync://uid@other.host/some_dir
606
607
608 --s3-european-buckets
609 When using the Amazon S3 backend, create buckets in Europe
610 instead of the default (requires --s3-use-new-style ). Also see
611 the EUROPEAN S3 BUCKETS section.
612
613
614 --s3-unencrypted-connection
615 Don't use SSL for connections to S3.
616
617 This may be much faster, at some cost to confidentiality.
618
619 With this option, anyone who can observe traffic between your
620 computer and S3 will be able to tell: that you are using
621 Duplicity, the name of the bucket, your AWS Access Key ID, the
622 increment dates and the amount of data in each increment.
623
624 This option affects only the connection, not the GPG encryption
625 of the backup increment files. Unless that is disabled, an
626 observer will not be able to see the file names or contents.
627
628
629 --s3-use-new-style
630 When operating on Amazon S3 buckets, use new-style subdomain
631 bucket addressing. This is now the preferred method to access
632 Amazon S3, but is not backwards compatible if your bucket name
633 contains upper-case characters or other characters that are not
634 valid in a hostname.
635
636
637 --s3-use-rrs
638 Store volumes using Reduced Redundancy Storage when uploading to
639 Amazon S3. This will lower the cost of storage but also lower
640 the durability of stored volumes to 99.99% instead the
641 99.999999999% durability offered by Standard Storage on S3.
642
643
644 --s3-use-ia
645 Store volumes using Standard - Infrequent Access when uploading
646 to Amazon S3. This storage class has a lower storage cost but a
647 higher per-request cost, and the storage cost is calculated
648 against a 30-day storage minimum. According to Amazon, this
649 storage is ideal for long-term file storage, backups, and
650 disaster recovery.
651
652
653 --s3-use-multiprocessing
              Allow multipart volume uploads to S3 through multiprocessing.
655 This option requires Python 2.6 and can be used to make uploads
656 to S3 more efficient. If enabled, files duplicity uploads to S3
657 will be split into chunks and uploaded in parallel. Useful if
658 you want to saturate your bandwidth or if large files are
659 failing during upload.
660
661
662 --s3-use-server-side-encryption
              Allow use of server side encryption in S3.
664
665
666 --s3-multipart-chunk-size
667 Chunk size (in MB) used for S3 multipart uploads. Make this
668 smaller than --volsize to maximize the use of your bandwidth.
669 For example, a chunk size of 10MB with a volsize of 30MB will
670 result in 3 chunks per volume upload.
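              For example, matching the numbers above (bucket name
              illustrative):

                     duplicity --s3-use-multiprocessing --volsize 30 --s3-multipart-chunk-size 10 \
                            /home/me s3+http://bucket_name/prefix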
671
672
673 --s3-multipart-max-procs
674 Specify the maximum number of processes to spawn when performing
675 a multipart upload to S3. By default, this will choose the
676 number of processors detected on your system (e.g. 4 for a
677 4-core system). You can adjust this number as required to ensure
678 you don't overload your system while maximizing the use of your
679 bandwidth.
680
681
682 --s3-multipart-max-timeout
683 You can control the maximum time (in seconds) a multipart upload
684 can spend on uploading a single chunk to S3. This may be useful
685 if you find your system hanging on multipart uploads or if you'd
686 like to control the time variance when uploading to S3 to ensure
687 you kill connections to slow S3 endpoints.
688
689
690 --scp-command command
691 (only ssh pexpect backend with --use-scp enabled) The command
692 will be used instead of "scp" to send or receive files. To list
693 and delete existing files, the sftp command is used.
694 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
695
696
697 --sftp-command command
698 (only ssh pexpect backend) The command will be used instead of
699 "sftp".
700 See also A NOTE ON SSH BACKENDS section SSH pexpect backend.
701
702
703 --short-filenames
704 If this option is specified, the names of the files duplicity
705 writes will be shorter (about 30 chars) but less understandable.
706 This may be useful when backing up to MacOS or another OS or FS
707 that doesn't support long filenames.
708
709
710 --sign-key key-id
711 This option can be used when backing up, restoring or verifying.
712 When backing up, all backup files will be signed with keyid key.
713 When restoring, duplicity will signal an error if any remote
714 file is not signed with the given key-id. The key-id can be
715 given in any of the formats supported by GnuPG; see gpg(1),
716 section "HOW TO SPECIFY A USER ID" for details. Should be
717 specified only once because currently only one signing key is
718 supported. Last entry overrides all other entries.
719 See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
720
721
722 --ssh-askpass
723 Tells the ssh backend to prompt the user for the remote system
724 password, if it was not defined in target url and no
725 FTP_PASSWORD env var is set. This password is also used for
726 passphrase-protected ssh keys.
727
728
729 --ssh-options options
730 Allows you to pass options to the ssh backend. Can be specified
731 multiple times or as a space separated options list. The
732 options list should be of the form "-oOpt1='parm1'
733 -oOpt2='parm2'" where the option string is quoted and the only
734 spaces allowed are between options. The option string will be
              passed verbatim to both scp and sftp, whose command line syntax
              differs slightly; the options should therefore be given in the
              long option format described in ssh_config(5).
738
739 example of a list:
740
741 duplicity --ssh-options="-oProtocol=2
742 -oIdentityFile='/my/backup/id'" /home/me
743 scp://user@host/some_dir
744
745 example with multiple parameters:
746
747 duplicity --ssh-options="-oProtocol=2" --ssh-
748 options="-oIdentityFile='/my/backup/id'" /home/me
749 scp://user@host/some_dir
750
751 NOTE: The ssh paramiko backend currently supports only the -i or
752 -oIdentityFile setting. If needed provide more host specific
753 options via ssh_config file.
754
755
756 --ssl-cacert-file file
757 (only webdav & lftp backend) Provide a cacert file for ssl
758 certificate verification.
759 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
760
761
762 --ssl-cacert-path path/to/certs/
763 (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
764 recent lftp) Provide a path to a folder containing cacert files
765 for ssl certificate verification.
766 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
767
768
769 --ssl-no-check-certificate
770 (only webdav & lftp backend) Disable ssl certificate
771 verification.
772 See also A NOTE ON SSL CERTIFICATE VERIFICATION.
773
774
775 --metadata-sync-mode mode
776 This option defaults to 'full', but you can set it to 'partial'
777 to avoid syncing metadata for backup chains that you are not
778 going to use. This saves time when restoring for the first
779 time, and lets you restore an old backup that was encrypted with
780 a different passphrase by supplying only the target passphrase.
781
782
783 --tempdir directory
784 Use this existing directory for duplicity temporary files
785 instead of the system default, which is usually the /tmp
786 directory. This option supersedes any environment variable.
787 See also ENVIRONMENT VARIABLES.
788
789
790 -ttime, --time time, --restore-time time
791 Specify the time from which to restore or list files.
792
793
794 --time-separator char
795 Use char as the time separator in filenames instead of colon
796 (":").
797
798
799 --timeout seconds
800 Use seconds as the socket timeout value if duplicity begins to
801 timeout during network operations. The default is 30 seconds.
802
803
804 --use-agent
805 If this option is specified, then --use-agent is passed to the
806 GnuPG encryption process and it will try to connect to gpg-agent
807 before it asks for a passphrase for --encrypt-key or --sign-key
808 if needed.
809 Note: Contrary to previous versions of duplicity, this option
810 will also be honored by GnuPG 2 and newer versions. If GnuPG 2
811 is in use, duplicity passes the option --pinentry-mode=loopback
              to the gpg process unless --use-agent is specified on the
813 duplicity command line. This has the effect that GnuPG 2 uses
814 the agent only if --use-agent is given, just like GnuPG 1.
815
816
817 --verbosity level, -vlevel
818 Specify output verbosity level (log level). Named levels and
819 corresponding values are 0 Error, 2 Warning, 4 Notice (default),
820 8 Info, 9 Debug (noisiest).
821 level may also be
822 a character: e, w, n, i, d
823 a word: error, warning, notice, info, debug
824
825 The options -v4, -vn and -vnotice are functionally equivalent,
826 as are the mixed/upper-case versions -vN, -vNotice and -vNOTICE.
827
828
829 --version
830 Print duplicity's version and quit.
831
832
833 --volsize number
834 Change the volume size to number MB. Default is 200MB.
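              For example, to use 100MB volumes (location illustrative):

                     duplicity --volsize 100 /home/me sftp://uid@other.host/some_dir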
835
ENVIRONMENT VARIABLES
838 TMPDIR, TEMP, TMP
839 In decreasing order of importance, specifies the directory to
840 use for temporary files (inherited from Python's tempfile
              module). The option --tempdir, if given, supersedes any of
              these.
843
844 FTP_PASSWORD
845 Supported by most backends which are password capable. More
846 secure than setting it in the backend url (which might be
847 readable in the operating systems process listing to other users
848 on the same machine).
849
850 PASSPHRASE
851 This passphrase is passed to GnuPG. If this is not set, the user
852 will be prompted for the passphrase.
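              For example, a non-interactive symmetrically encrypted backup
              might be run as (passphrase and location illustrative):

                     PASSPHRASE=mysecret duplicity /home/me sftp://uid@other.host/some_dir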
853
854 SIGN_PASSPHRASE
              The passphrase to be used for --sign-key. If omitted and the
              sign key is also one of the keys to encrypt against,
              PASSPHRASE will be reused instead. Otherwise, if a passphrase
              is needed but not set, the user will be prompted for it.
859
URL FORMAT
862 Duplicity uses the URL format (as standard as possible) to define data
863 locations. The generic format for a URL is:
864
865 scheme://[user[:password]@]host[:port]/[/]path
866
       It is not recommended to expose the password on the command line,
       since it could be revealed to anyone with permission to do process
       listings; it is permitted, however. Consider setting the environment
       variable FTP_PASSWORD instead, which is used by most, if not all,
       backends, regardless of its name.
872
873 In protocols that support it, the path may be preceded by a single
874 slash, '/path', to represent a relative path to the target home
875 directory, or preceded by a double slash, '//path', to represent an
876 absolute filesystem path.
877
878 Note:
879 Scheme (protocol) access may be provided by more than one
880 backend. In case the default backend is buggy or simply not
881 working in a specific case it might be worth trying an
882 alternative implementation. Alternative backends can be
883 selected by prefixing the scheme with the name of the
884 alternative backend e.g. ncftp+ftp:// and are mentioned below
885 the scheme's syntax summary.
886
887
888 Formats of each of the URL schemes follow:
889
890
891 Azure
892
893 azure://container-name
894
895 See also A NOTE ON AZURE ACCESS
896
897 B2
898
899 b2://account_id[:application_key]@bucket_name/[folder/]
900
901 Cloud Files (Rackspace)
902
903 cf+http://container_name
904
905 See also A NOTE ON CLOUD FILES ACCESS
906
907 Dropbox
908
909 dpbx:///some_dir
910
911 Make sure to read A NOTE ON DROPBOX ACCESS first!
912
913 Local file path
914
915 file://[relative|/absolute]/local/path
916
917 FISH (Files transferred over Shell protocol) over ssh
918
919 fish://user[:password]@other.host[:port]/[relative|/absolute]_path
920
921 FTP
922
923 ftp[s]://user[:password]@other.host[:port]/some_dir
924
925 NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
926 default is lftp+ftp://...
927
928 Google Docs
929
930 gdocs://user[:password]@other.host/some_dir
931
932 NOTE: use pydrive+, gdata+ prefixes to enforce a specific
933 backend, default is pydrive+gdocs://...
934
935 Google Cloud Storage
936
937 gs://bucket[/prefix]
938
939 HSI
940
941 hsi://user[:password]@other.host/some_dir
942
943 hubiC
944
945 cf+hubic://container_name
946
947 See also A NOTE ON HUBIC
948
949 IMAP email storage
950
951 imap[s]://user[:password]@host.com[/from_address_prefix]
952
953 See also A NOTE ON IMAP
954
955 Mega cloud storage
956
957 mega://user[:password]@mega.co.nz/some_dir
958
959 OneDrive Backend
960
961 onedrive://some_dir
962
963 Par2 Wrapper Backend
964
965 par2+scheme://[user[:password]@]host[:port]/[/]path
966
967 See also A NOTE ON PAR2 WRAPPER BACKEND
968
969 Rsync via daemon
970
971 rsync://user[:password]@host.com[:port]::[/]module/some_dir
972
973 Rsync over ssh (only key auth)
974
975 rsync://user@host.com[:port]/[relative|/absolute]_path
976
977 S3 storage (Amazon)
978
979 s3://host[:port]/bucket_name[/prefix]
980 s3+http://bucket_name[/prefix]
981
982 See also A NOTE ON EUROPEAN S3 BUCKETS
983
984 SCP/SFTP access
985
986 scp://.. or
987 sftp://user[:password]@other.host[:port]/[relative|/absolute]_path
988
989 defaults are paramiko+scp:// and paramiko+sftp://
990 alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
991 See also --ssh-askpass, --ssh-options and A NOTE ON SSH
992 BACKENDS.
993
994 Swift (Openstack)
995
996 swift://container_name[/prefix]
997
998 See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
999
1000 Tahoe-LAFS
1001
1002 tahoe://alias/directory
1003
1004 WebDAV
1005
1006 webdav[s]://user[:password]@other.host[:port]/some_dir
1007
1008 alternatively try lftp+webdav[s]://
1009
1010 pydrive
1011
              pydrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir
1014
1015 See also A NOTE ON PYDRIVE BACKEND below.
1016
1017 multi
1018
1019 multi:///path/to/config.json
1020
1021 See also A NOTE ON MULTI BACKEND below.
1022
1023 MediaFire
1024
1025 mf://user[:password]@mediafire.com/some_dir
1026
1027 See also A NOTE ON MEDIAFIRE BACKEND below.
1028
TIME FORMATS
1031 duplicity uses time strings in two places. Firstly, many of the files
1032 duplicity creates will have the time in their filenames in the w3
1033 datetime format as described in a w3 note at http://www.w3.org/TR/NOTE-
1034 datetime. Basically they look like "2001-07-15T04:09:38-07:00", which
1035 means what it looks like. The "-07:00" section means the time zone is
1036 7 hours behind UTC.
1037
1038 Secondly, the -t, --time, and --restore-time options take a time
1039 string, which can be given in any of several formats:
1040
1041 1. the string "now" (refers to the current time)
1042
       2. a sequence of digits, like "123456890" (indicating the time in
1044 seconds after the epoch)
1045
1046 3. A string like "2002-01-25T07:00:00+02:00" in datetime format
1047
1048 4. An interval, which is a number followed by one of the characters
1049 s, m, h, D, W, M, or Y (indicating seconds, minutes, hours,
1050 days, weeks, months, or years respectively), or a series of such
1051 pairs. In this case the string refers to the time that preceded
1052 the current time by the length of the interval. For instance,
1053 "1h78m" indicates the time that was one hour and 78 minutes ago.
1054 The calendar here is unsophisticated: a month is always 30 days,
1055 a year is always 365 days, and a day is always 86400 seconds.
1056
1057 5. A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY, or
1058 MM-DD-YYYY, which indicates midnight on the day in question,
1059 relative to the current time zone settings. For instance,
1060 "2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
1061 2002.
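       For example, to restore a backup as it was on March 5th, 2002, one of
       the date forms above can be passed to --restore-time (location and
       target directory are illustrative):

              duplicity --restore-time 2002-03-05 sftp://uid@other.host/some_dir /home/me/restored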
1062
FILE SELECTION
1065 When duplicity is run, it searches through the given source directory
1066 and backs up all the files specified by the file selection system. The
1067 file selection system comprises a number of file selection conditions,
1068 which are set using one of the following command line options:
1069 --exclude
1070 --exclude-device-files
1071 --exclude-filelist
1072 --exclude-regexp
1073 --include
1074 --include-filelist
1075 --include-regexp
1076 Each file selection condition either matches or doesn't match a given
1077 file. A given file is excluded by the file selection system exactly
1078 when the first matching file selection condition specifies that the
1079 file be excluded; otherwise the file is included.
1080
1081 For instance,
1082
1083 duplicity --include /usr --exclude /usr /usr
1084 scp://user@host/backup
1085
1086 is exactly the same as
1087
1088 duplicity /usr scp://user@host/backup
1089
1090 because the include and exclude directives match exactly the same
1091 files, and the --include comes first, giving it precedence. Similarly,
1092
1093 duplicity --include /usr/local/bin --exclude /usr/local /usr
1094 scp://user@host/backup
1095
       would back up the /usr/local/bin directory (and its contents), but not
1097 /usr/local/doc.
1098
1099 The include, exclude, include-filelist, and exclude-filelist options
1100 accept some extended shell globbing patterns. These patterns can
1101 contain *, **, ?, and [...] (character ranges). As in a normal shell,
1102 * can be expanded to any string of characters not containing "/", ?
1103 expands to any character except "/", and [...] expands to a single
1104 character of those characters specified (ranges are acceptable). The
1105 new special pattern, **, expands to any string of characters whether or
1106 not it contains "/". Furthermore, if the pattern starts with
1107 "ignorecase:" (case insensitive), then this prefix will be removed and
1108 any character in the string can be replaced with an upper- or lowercase
1109 version of itself.
1110
1111 Remember that you may need to quote these characters when typing them
1112 into a shell, so the shell does not interpret the globbing patterns
1113 before duplicity sees them.
1114
1115 The --exclude pattern option matches a file if:
1116
1117 1. pattern can be expanded into the file's filename, or
1118 2. the file is inside a directory matched by the option.
1119
1120 Conversely, the --include pattern matches a file if:
1121
1122 1. pattern can be expanded into the file's filename, or
1123 2. the file is inside a directory matched by the option, or
1124 3. the file is a directory which contains a file matched by the
1125 option.
1126
1127 For example,
1128
1129 --exclude /usr/local
1130
1131 matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
1132 It is the same as --exclude /usr/local --exclude '/usr/local/**'.
1133
1134 On the other hand
1135
1136 --include /usr/local
1137
1138 specifies that /usr, /usr/local, /usr/local/lib, and
1139 /usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus you
1140 don't have to worry about including parent directories to make sure
1141 that included subdirectories have somewhere to go.
1142
1143 Finally,
1144
1145 --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'
1146
1147 would match a file like /usR/5fOO/hello/there/world.py. If it did
1148 match anything, it would also match /usr. If there is no existing file
1149 that the given pattern can be expanded into, the option will not match
1150 /usr alone.
1151
1152 The --include-filelist, and --exclude-filelist, options also introduce
1153 file selection conditions. They direct duplicity to read in a file,
1154 each line of which is a file specification, and to include or exclude
1155 the matching files. Lines are separated by newlines or nulls,
1156 depending on whether the --null-separator switch was given. Each line
1157 in the filelist will be interpreted as a globbing pattern the way
1158 --include and --exclude options are interpreted, except that lines
1159 starting with "+ " are interpreted as include directives, even if found
1160 in a filelist referenced by --exclude-filelist. Similarly, lines
1161 starting with "- " exclude files even if they are found within an
1162 include filelist.
1163
1164 For example, if file "list.txt" contains the lines:
1165
1166 /usr/local
1167 - /usr/local/doc
1168 /usr/local/bin
1169 + /var
1170 - /var
1171
1172 then --include-filelist list.txt would include /usr, /usr/local, and
1173 /usr/local/bin. It would exclude /usr/local/doc,
1174 /usr/local/doc/python, etc. It would also include /usr/local/man, as
       this is included within /usr/local. Finally, it is undefined what
1176 happens with /var. A single file list should not contain conflicting
1177 file specifications.
1178
1179 Each line in the filelist will also be interpreted as a globbing
1180 pattern the way --include and --exclude options are interpreted. For
1181 instance, if the file "list.txt" contains the lines:
1182
1183 dir/foo
1184 + dir/bar
1185 - **
1186
1187 Then --include-filelist list.txt would be exactly the same as
1188 specifying --include dir/foo --include dir/bar --exclude ** on the
1189 command line.
1190
1191 Finally, the --include-regexp and --exclude-regexp options allow files
1192 to be included and excluded if their filenames match a python regular
1193 expression. Regular expression syntax is too complicated to explain
1194 here, but is covered in Python's library reference. Unlike the
1195 --include and --exclude options, the regular expression options don't
1196 match files containing or contained in matched files. So for instance
1197
1198 --include '[0-9]{7}(?!foo)'
1199
1200 matches any files whose full pathnames contain 7 consecutive digits
1201 which aren't followed by 'foo'. However, it wouldn't match /home even
1202 if /home/ben/1234567 existed.
1203
A NOTE ON AZURE ACCESS
1206 The Azure backend requires the Microsoft Azure Storage SDK for Python
1207 to be installed on the system. See REQUIREMENTS above.
1208
       It uses two environment variables for authentication:
1210 AZURE_ACCOUNT_NAME (required), AZURE_ACCOUNT_KEY (required)
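       A minimal invocation might look like this (account name, key and
       container name are placeholders):

              AZURE_ACCOUNT_NAME=myaccount AZURE_ACCOUNT_KEY=mykey \
                     duplicity /home/me azure://my-backup-container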
1211
1212 A container name must be a valid DNS name, conforming to the following
1213 naming rules:
1214
1215
1216 1. Container names must start with a letter or number, and
1217 can contain only letters, numbers, and the dash (-)
1218 character.
1219
1220 2. Every dash (-) character must be immediately preceded and
1221 followed by a letter or number; consecutive dashes are
1222 not permitted in container names.
1223
1224 3. All letters in a container name must be lowercase.
1225
1226 4. Container names must be from 3 through 63 characters
1227 long.
1228
A NOTE ON CLOUD FILES ACCESS
1231 Pyrax is Rackspace's next-generation Cloud management API, including
1232 Cloud Files access. The cfpyrax backend requires the pyrax library to
1233 be installed on the system. See REQUIREMENTS above.
1234
1235 Cloudfiles is Rackspace's now deprecated implementation of OpenStack
1236 Object Storage protocol. Users wishing to use Duplicity with Rackspace
1237 Cloud Files should migrate to the new Pyrax plugin to ensure support.
1238
1239 The backend requires python-cloudfiles to be installed on the system.
1240 See REQUIREMENTS above.
1241
       It uses three environment variables for authentication:
1243 CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
1244 CLOUDFILES_AUTHURL (optional)
1245
1246 If CLOUDFILES_AUTHURL is unspecified it will default to the value
1247 provided by python-cloudfiles, which points to rackspace, hence this
1248 value must be set in order to use other cloud files providers.
1249
A NOTE ON DROPBOX ACCESS
       1. First of all, the Dropbox backend requires a valid authentication
          token. It should be passed via the DPBX_ACCESS_TOKEN environment
          variable.
          To obtain it, please create a 'Dropbox API' application at:
          https://www.dropbox.com/developers/apps/create
          Then visit the app settings and just use the 'Generated access
          token' under the OAuth2 section.
          Alternatively, you can let duplicity generate the access token
          itself. In that case, temporarily export DPBX_APP_KEY and
          DPBX_APP_SECRET using the values from the app settings page and
          run duplicity interactively.
          It will print the URL that you need to open in the browser to
          obtain an OAuth2 token for the application. Just follow the
          on-screen instructions and then put the generated token into the
          DPBX_ACCESS_TOKEN variable. Once done, feel free to unset
          DPBX_APP_KEY and DPBX_APP_SECRET.
1268
1269
       2. "some_dir" must already exist in the Dropbox folder. Depending on
          the kind of access token, it may be:
          Full Dropbox: the path is absolute and starts from the 'Dropbox'
          root folder.
          App Folder: the path is relative to the application folder. The
          Dropbox client will show it in ~/Dropbox/Apps/<app-name>
1276
1277
1278 3. When using Dropbox for storage, be aware that all files,
1279 including the ones in the Apps folder, will be synced to all
1280 connected computers. You may prefer to use a separate Dropbox
1281 account specially for the backups, and not connect any computers
          to that account. Alternatively, you can configure selective sync
          on all computers to avoid syncing of backup files.
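       With the token in place, a backup might be run as (token and paths
       are placeholders):

              DPBX_ACCESS_TOKEN=<token> duplicity /home/me dpbx:///some_dir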
1284
A NOTE ON EUROPEAN S3 BUCKETS
       Amazon S3 provides the ability to choose the location of a bucket upon
       its creation. The purpose is to enable the user to choose a location
       that is network-topologically closer to the user, which may allow for
       faster data transfers.
1291
1292 duplicity will create a new bucket the first time a bucket access is
1293 attempted. At this point, the bucket will be created in Europe if
1294 --s3-european-buckets was given. For reasons having to do with how the
1295 Amazon S3 service works, this also requires the use of the --s3-use-
1296 new-style option. This option turns on subdomain based bucket
1297 addressing in S3. The details are beyond the scope of this man page,
1298 but it is important to know that your bucket must not contain upper
1299 case letters or any other characters that are not valid parts of a
1300 hostname. Consequently, for reasons of backwards compatibility, use of
1301 subdomain based bucket addressing is not enabled by default.
1302
1303 Note that you will need to use --s3-use-new-style for all operations on
1304 European buckets; not just upon initial creation.
1305
1306 You only need to use --s3-european-buckets upon initial creation, but
       you may use it at all times for consistency.
1308
1309 Further note that when creating a new European bucket, it can take a
1310 while before the bucket is fully accessible. At the time of this
1311 writing it is unclear to what extent this is an expected feature of
1312 Amazon S3, but in practice you may experience timeouts, socket errors
1313 or HTTP errors when trying to upload files to your newly created
1314 bucket. Give it a few minutes and the bucket should function normally.
1315
A NOTE ON FILENAME PREFIXES
1318 Filename prefixes can be used in conjunction with S3 lifecycle rules to
1319 transition archive files to Glacier, while keeping metadata (signature
1320 and manifest files) on S3.
1321
1322 Duplicity does not require access to archive files except when
1323 restoring from backup.
1324
A NOTE ON GOOGLE CLOUD STORAGE
1327 Support for Google Cloud Storage relies on its Interoperable Access,
1328 which must be enabled for your account. Once enabled, you can generate
1329 Interoperable Storage Access Keys and pass them to duplicity via the
1330 GS_ACCESS_KEY_ID and GS_SECRET_ACCESS_KEY environment variables.
1331 Alternatively, you can run gsutil config -a to have the Google Cloud
1332 Storage utility populate the ~/.boto configuration file.
1333
1334 Enable Interoperable Access:
1335 https://code.google.com/apis/console#:storage
1336 Create Access Keys:
1337 https://code.google.com/apis/console#:storage:legacy
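       With interoperable access enabled, a backup might be run as (keys and
       bucket name are placeholders):

              GS_ACCESS_KEY_ID=<key> GS_SECRET_ACCESS_KEY=<secret> \
                     duplicity /home/me gs://my-bucket/duplicity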
1338
A NOTE ON HUBIC
1341 The hubic backend requires the pyrax library to be installed on the
1342 system. See REQUIREMENTS above. You will need to set your credentials
1343 for hubiC in a file called ~/.hubic_credentials, following this
1344 pattern:
1345
1346 [hubic]
1347 email = your_email
1348 password = your_password
1349 client_id = api_client_id
1350 client_secret = api_secret_key
1351 redirect_uri = http://localhost/
1352
A NOTE ON IMAP
1355 An IMAP account can be used as a target for the upload. The userid may
1356 be specified and the password will be requested.
1357
1358 The from_address_prefix may be specified (and probably should be). The
1359 text will be used as the "From" address in the IMAP server. Then on a
1360 restore (or list) command the from_address_prefix will distinguish
1361 between different backups.
1362
A NOTE ON MULTI BACKEND
1365 The multi backend allows duplicity to combine the storage available in
1366 more than one backend store (e.g., you can store across a google drive
1367 account and a onedrive account to get effectively the combined storage
       available in both). The URL path specifies a JSON formatted config file
1369 containing a list of the backends it will use. The URL may also specify
1370 "query" parameters to configure overall behavior. Each element of the
1371 list must have a "url" element, and may also contain an optional
1372 "description" and an optional "env" list of environment variables used
1373 to configure that backend.
1374
1375 Query Parameters
1376 Query parameters come after the file URL in standard HTTP format for
1377 example:
1378 multi:///path/to/config.json?mode=mirror&onfail=abort
1379 multi:///path/to/config.json?mode=stripe&onfail=continue
1380 multi:///path/to/config.json?onfail=abort&mode=stripe
1381 multi:///path/to/config.json?onfail=abort
1382 Order does not matter, however unrecognized parameters are considered
1383 an error.
1384
1385 mode=stripe
1386 This mode (the default) performs round-robin access to the list
1387 of backends. In this mode, all backends must be reliable as a
1388 loss of one means a loss of one of the archive files.
1389
1390 mode=mirror
1391 This mode accesses backends as a RAID1-store, storing every file
1392 in every backend and reading files from the first-successful
1393 backend. A loss of any backend should result in no failure.
1394 Note that backends added later will only get new files and may
1395 require a manual sync with one of the other operating ones.
1396
1397 onfail=continue
              This setting (the default) continues all write operations on a
              best-effort basis. Any failure results in the next backend
              being tried.
1400 Failure is reported only when all backends fail a given
1401 operation with the error result from the last failure.
1402
1403 onfail=abort
1404 This setting considers any backend write failure as a
1405 terminating condition and reports the error. Data reading and
1406 listing operations are independent of this and will try with the
1407 next backend on failure.
1408
1409 JSON File Example
1410 [
1411 {
              "description": "a comment about the backend",
1413 "url": "abackend://myuser@domain.com/backup",
1414 "env": [
1415 {
1416 "name" : "MYENV",
1417 "value" : "xyz"
1418 },
1419 {
1420 "name" : "FOO",
1421 "value" : "bar"
1422 }
1423 ]
1424 },
1425 {
1426 "url": "file:///path/to/dir"
1427 }
1428 ]
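       Assuming the config file above is saved as /path/to/config.json, it
       might be used like this:

              duplicity /home/me "multi:///path/to/config.json?mode=mirror&onfail=abort"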
1429
A NOTE ON PAR2 WRAPPER BACKEND
1432 Par2 Wrapper Backend can be used in combination with all other backends
1433 to create recovery files. Just add par2+ before a regular scheme (e.g.
1434 par2+ftp://user@host/dir or par2+s3+http://bucket_name ). This will
1435 create par2 recovery files for each archive and upload them all to the
1436 wrapped backend.
1437
1438 Before restoring, archives will be verified. Corrupt archives will be
1439 repaired on the fly if there are enough recovery blocks available.
1440
1441 Use --par2-redundancy percent to adjust the size (and redundancy) of
1442 recovery files in percent.
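       For example, wrapping the local file backend used in the EXAMPLES
       section with 20% redundancy:

              duplicity --par2-redundancy 20 /home/me par2+file:///usr/local/backup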
1443
A NOTE ON PYDRIVE BACKEND
       The pydrive backend requires the Python PyDrive package to be installed on
1447 the system. See REQUIREMENTS above.
1448
1449 There are two ways to use PyDrive: with a regular account or with a
1450 "service account". With a service account, a separate account is
1451 created, that is only accessible with Google APIs and not a web login.
1452 With a regular account, you can store backups in your normal Google
1453 Drive.
1454
1455 To use a service account, go to the Google developers console at
1456 https://console.developers.google.com. Create a project, and make sure
1457 Drive API is enabled for the project. Under "APIs and auth", click
1458 Create New Client ID, then select Service Account with P12 key.
1459
1460 Download the .p12 key file of the account and convert it to the .pem
1461 format:
1462 openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem
1463
       The content of the .pem file should be passed to the
       GOOGLE_DRIVE_ACCOUNT_KEY environment variable for authentication.
1466
1467 The email address of the account will be used as part of URL. See URL
1468 FORMAT above.
1469
1470 The alternative is to use a regular account. To do this, start as
1471 above, but when creating a new Client ID, select "Installed
1472 application" of type "Other". Create a file with the following content,
1473 and pass its filename in the GOOGLE_DRIVE_SETTINGS environment
1474 variable:
1475
1476 client_config_backend: settings
1477 client_config:
1478 client_id: <Client ID from developers' console>
1479 client_secret: <Client secret from developers' console>
1480 save_credentials: True
1481 save_credentials_backend: file
1482 save_credentials_file: <filename to cache credentials>
1483 get_refresh_token: True
1484
1485 In this scenario, the username and host parts of the URL play no role;
1486 only the path matters. During the first run, you will be prompted to
       visit a URL in your browser to grant access to your drive. Once
       granted, you will receive a verification code to paste back into
       Duplicity. The credentials are then cached in the file referenced
       above for future use.
1491
A NOTE ON SSH BACKENDS
1494 The ssh backends support sftp and scp/ssh transport protocols. This is
1495 a known user-confusing issue as these are fundamentally different. If
1496 you plan to access your backend via one of those please inform yourself
1497 about the requirements for a server to support sftp or scp/ssh access.
1498 To make it even more confusing the user can choose between several ssh
1499 backends via a scheme prefix: paramiko+ (default), pexpect+, lftp+... .
1500 paramiko & pexpect support --use-scp, --ssh-askpass and --ssh-options.
       Only the pexpect backend allows you to define --scp-command and
       --sftp-command.
1503
1504 The SSH paramiko backend (default) is a complete reimplementation of
1505 the ssh protocols natively in python. Its advantages are speed and
1506 maintainability. A minor disadvantage is that extra packages are
1507 needed, as listed in REQUIREMENTS above. In sftp (default) mode all
1508 operations are done via the corresponding sftp commands. In scp mode
1509 ( --use-scp ) scp is used for put/get operations, but listing is done
1510 via an ssh remote shell.
1511
1512 The SSH pexpect backend is the legacy ssh backend using the command
1513 line ssh binaries via pexpect. Older versions used scp for get and put
1514 operations and sftp for list and delete operations. The current
1515 version uses sftp for all four supported operations, unless the --use-
1516 scp option is used to revert to the old behavior.
1517
1518 The SSH lftp backend exists because lftp can interact with the ssh
1519 command line binaries. It is meant as a last resort in case the above
1520 options fail for some reason.
1521
1522 Why use sftp instead of scp? The change to sftp was made in order to
1523 allow the remote system to chroot the backup, thus providing better
1524 security, and because sftp does not suffer from the shell quoting
1525 issues that scp does. Scp also does not support any kind of file
1526 listing, so sftp or ssh access will always be needed in addition for
1527 this backend mode to work properly. Sftp does not have these
1528 limitations but needs an sftp service running on the backend server,
1529 which is sometimes not an option.
1530
1531
A NOTE ON SSL CERTIFICATE VERIFICATION
1533 Certificate verification is, as of this writing [02.2016], implemented
1534 only in the webdav and lftp backends. Older Python versions (2.7.8 and
1535 earlier) and older lftp binaries need a file-based database of
1536 certificate authority certificates (cacert file).
1537 Newer Python (2.7.9+) and recent lftp versions, however, support the
1538 system default certificates (usually in /etc/ssl/certs) and also allow
1539 specifying an alternative CA certificate folder via --ssl-cacert-path.
1540
1541 The cacert file has to be a PEM formatted text file as currently
1542 provided by the CURL project. See
1543
1544 http://curl.haxx.se/docs/caextract.html
1545
1546 After creating/retrieving a valid cacert file you should copy it to
1547 either
1548
1549 ~/.duplicity/cacert.pem
1550 ~/duplicity_cacert.pem
1551 /etc/duplicity/cacert.pem
1552
1553 Duplicity searches these locations in the order listed and will fail
1554 if it can't find a cacert file. You can, however, use the option
1555 --ssl-cacert-file <file> to point duplicity to a copy elsewhere.
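
       For example, to back up over webdavs while pointing duplicity at a
       cacert copy outside the default search locations (the path is
       illustrative):

              duplicity --ssl-cacert-file /path/to/cacert.pem \
                   /home/me webdavs://user@other.host/some_dir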
1556
1557 Finally there is the --ssl-no-check-certificate option to disable
1558 certificate verification altogether, in case some ssl library is
1559 missing or verification is not wanted. Use it with care; even with
1560 self-signed servers, manually providing the private CA certificate is
1561 definitely the safer option.
1562
1563
A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
1565 Swift is the OpenStack Object Storage service.
1566 The backend requires python-swiftclient to be installed on the system.
1567 python-keystoneclient is also needed to use OpenStack's Keystone
1568 Identity service. See REQUIREMENTS above.
1569
1570 It uses the following environment variables for authentication:
1571 SWIFT_USERNAME (required), SWIFT_PASSWORD (required), SWIFT_AUTHURL
1572 (required), SWIFT_USERID (required, only for IBM Bluemix
1573 ObjectStorage), SWIFT_TENANTID (required, only for IBM Bluemix
1574 ObjectStorage), SWIFT_REGIONNAME (required, only for IBM Bluemix
1575 ObjectStorage), SWIFT_TENANTNAME (optional, the tenant can be included
1576 in the username)
1577
1578 If the user was previously authenticated, the following environment
1579 variables can be used instead: SWIFT_PREAUTHURL (required),
1580 SWIFT_PREAUTHTOKEN (required)
1581
1582 If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
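
       A minimal invocation might look like the following (the
       credentials, auth URL and container name are placeholders):

              SWIFT_USERNAME=myuser \
              SWIFT_PASSWORD=mypassword \
              SWIFT_AUTHURL=https://auth.example.com/v1.0 \
                   duplicity /home/me swift://my_container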
1583
1584
A NOTE ON MEDIAFIRE BACKEND
1586 This backend requires mediafire python library to be installed on the
1587 system. See REQUIREMENTS.
1588
1589 Use URL escaping for the username (and password, if provided on the
1590 command line):
1591
1592
1593 mf://duplicity%40example.com@mediafire.com/some_folder
1594
1595 The destination folder will be created for you if it does not exist.
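
       As with most password-capable backends, the password can usually
       also be supplied via the FTP_PASSWORD environment variable instead
       of embedding it in the URL, e.g.:

              FTP_PASSWORD=mypassword duplicity /home/me \
                   mf://duplicity%40example.com@mediafire.com/some_folder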
1596
1597
A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
1599 Signing and symmetrically encrypting at the same time with the gpg
1600 binary on the command line, as duplicity does, is a particularly
1601 challenging issue. Tests showed that the following combinations work
1602 (an example invocation follows the list).
1603
1604 1. Set up gpg-agent properly. Use the option --use-agent and enter both
1605 passphrases (symmetric and sign key) in the gpg-agent's dialog.
1606
1607 2. Use a PASSPHRASE of your choice for symmetric encryption while the
1608 signing key has an empty passphrase.
1609
1610 3. The PASSPHRASE used for symmetric encryption and the passphrase of
1611 the signing key are identical.
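
       As an illustration of variants 2 and 3 above (the key id ABCD1234
       is a placeholder; PASSPHRASE supplies the symmetric passphrase,
       and in variant 3 the signing key's passphrase as well):

              PASSPHRASE=my_symmetric_passphrase \
                   duplicity --sign-key ABCD1234 /home/me \
                   sftp://uid@other.host/some_dir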
1612
1613
KNOWN ISSUES / BUGS
1615 Hard links are currently unsupported (they will be treated as
1616 non-linked regular files).
1617
1618 Bad signatures will be treated as empty instead of logging an
1619 appropriate error message.
1620
1621
OPERATION AND DATA FORMATS
1623 This section describes duplicity's basic operation and the format of
1624 its data files. It should not be necessary to read this section to use
1625 duplicity.
1626
1627 The files used by duplicity to store backup data are tarfiles in GNU
1628 tar format. They can be produced independently by rdiffdir(1). For
1629 incremental backups, new files are saved normally in the tarfile. But
1630 when a file changes, instead of storing a complete copy of the file,
1631 only a diff is stored, as generated by rdiff(1). If a file is deleted,
1632 a 0 length file is stored in the tar. It is possible to restore a
1633 duplicity archive "manually" by using tar and then cp, rdiff, and rm as
1634 necessary. These duplicity archives have the extension difftar.
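
       As an illustration of such a manual restore, a single encrypted
       volume (the file name below is only an example) could be unpacked
       with:

              gpg -d duplicity-full.20190429T120000Z.vol1.difftar.gpg | tar xvf -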
1635
1636 Both full and incremental backup sets have the same format. In effect,
1637 a full backup set is an incremental one generated from an empty
1638 signature (see below). The files in full backup sets will start with
1639 duplicity-full while the incremental sets start with duplicity-inc.
1640 When restoring, duplicity applies patches in order, so deleting, for
1641 instance, a full backup set may make related incremental backup sets
1642 unusable.
1643
1644 In order to determine which files have been deleted, and to calculate
1645 diffs for changed files, duplicity needs to process information about
1646 previous sessions. It stores this information in the form of tarfiles
1647 where each entry's data contains the signature (as produced by rdiff)
1648 of the file instead of the file's contents. These signature sets have
1649 the extension sigtar.
1650
1651 Signature files are not required to restore a backup set, but without
1652 an up-to-date signature, duplicity cannot append an incremental backup
1653 to an existing archive.
1654
1655 To save bandwidth, duplicity generates full signature sets and
1656 incremental signature sets. A full signature set is generated for each
1657 full backup, and an incremental one for each incremental backup. These
1658 start with duplicity-full-signatures and duplicity-new-signatures
1659 respectively. These signatures will be stored both locally and
1660 remotely. The remote signatures will be encrypted if encryption is
1661 enabled. The local signatures are not encrypted and are stored in the
1662 archive dir (see --archive-dir ).
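
       Putting these naming conventions together, a short chain on the
       remote side might look roughly like this (timestamps and volume
       counts are illustrative):

              duplicity-full.20190429T120000Z.vol1.difftar.gpg
              duplicity-full-signatures.20190429T120000Z.sigtar.gpg
              duplicity-inc.20190429T120000Z.to.20190430T120000Z.vol1.difftar.gpg
              duplicity-new-signatures.20190429T120000Z.to.20190430T120000Z.sigtar.gpg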
1663
1664
REQUIREMENTS
1666 Duplicity requires a POSIX-like operating system with a python
1667 interpreter version 2.6+ installed. It is best used under GNU/Linux.
1668
1669 Some backends also require additional components (probably available as
1670 packages for your specific platform):
1671
1672 azure backend (Azure Blob Storage Service)
1673 Microsoft Azure Storage SDK for Python -
1674 https://pypi.python.org/pypi/azure-storage/
1675
1676 boto backend (S3 Amazon Web Services, Google Cloud Storage)
1677 boto version 2.0+ - http://github.com/boto/boto
1678
1679 cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
1680 Rackspace CloudFiles Pyrax API -
1681 http://docs.rackspace.com/sdks/guide/content/python.html
1682
1683 dpbx backend (Dropbox)
1684 Dropbox Python SDK -
1685 https://www.dropbox.com/developers/reference/sdk
1686
1687 copy backend (Copy.com)
1688 python-urllib3 - https://github.com/shazow/urllib3
1689
1690 gdocs gdata backend (legacy Google Docs backend)
1691 Google Data APIs Python Client Library -
1692 http://code.google.com/p/gdata-python-client/
1693
1694 gdocs pydrive backend (default)
1695 see pydrive backend
1696
1697 gio backend (Gnome VFS API)
1698 PyGObject - http://live.gnome.org/PyGObject
1699 D-Bus (dbus) - http://www.freedesktop.org/wiki/Software/dbus
1700
1701 lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
1702 sftp, webdav[s])
1703 LFTP Client - http://lftp.yar.ru/
1704
1705 mega backend (mega.co.nz)
1706 megatools client - https://github.com/megous/megatools
1707
1708 multi backend
1709 Multi -- store to more than one backend
1710 (also see A NOTE ON MULTI BACKEND below).
1711
1712 ncftp backend (ftp, select via ncftp+ftp://)
1713 NcFTP - http://www.ncftp.com/
1714
1715 OneDrive backend (Microsoft OneDrive)
1716 python-requests - http://python-requests.org
1717 python-requests-oauthlib - https://github.com/requests/requests-
1718 oauthlib
1719
1720 Par2 Wrapper Backend
1721 par2cmdline - http://parchive.sourceforge.net/
1722
1723 pydrive backend
1724 PyDrive -- a wrapper library of google-api-python-client -
1725 https://pypi.python.org/pypi/PyDrive
1726 (also see A NOTE ON PYDRIVE BACKEND below).
1727
1728 rsync backend
1729 rsync client binary - http://rsync.samba.org/
1730
1731 ssh paramiko backend (default)
1732 paramiko (SSH2 for python) -
1733 http://pypi.python.org/pypi/paramiko (downloads);
1734 http://github.com/paramiko/paramiko (project page)
1735 pycrypto (Python Cryptography Toolkit) -
1736 http://www.dlitz.net/software/pycrypto/
1737
1738 ssh pexpect backend
1739 sftp/scp client binaries OpenSSH - http://www.openssh.com/
1740 Python pexpect module -
1741 http://pexpect.sourceforge.net/pexpect.html
1742
1743 swift backend (OpenStack Object Storage)
1744 Python swiftclient module - https://github.com/openstack/python-
1745 swiftclient/
1746 Python keystoneclient module -
1747 https://github.com/openstack/python-keystoneclient/
1748
1749 webdav backend
1750 certificate authority database file for ssl certificate
1751 verification of HTTPS connections -
1752 http://curl.haxx.se/docs/caextract.html
1753 (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
1754 Python kerberos module for kerberos authentication -
1755 https://github.com/02strich/pykerberos
1756
1757 MediaFire backend
1758 MediaFire Python Open SDK -
1759 https://pypi.python.org/pypi/mediafire/
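
       Many of the Python modules listed above can typically be installed
       from PyPI with pip (package names may differ on your platform),
       for example:

              pip install paramiko PyDrive python-swiftclient python-keystoneclient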
1760
1761
AUTHOR
1763 Original Author - Ben Escoto <bescoto@stanford.edu>
1764
1765 Current Maintainer - Kenneth Loafman <kenneth@loafman.com>
1766
1767 Continuous Contributors
1768 Edgar Soldin, Mike Terry
1769
1770 Most backends were contributed individually. Information about their
1771 authorship may be found in the corresponding file's header.
1772
1773 Also we'd like to thank everybody posting issues to the mailing list or
1774 on launchpad, sending in patches or contributing otherwise. Duplicity
1775 wouldn't be as stable and useful if it weren't for you.
1776
1777 A special thanks goes to rsync.net, a Cloud Storage provider with
1778 explicit support for duplicity, for several monetary donations and for
1779 providing a special "duplicity friends" rate for their offsite backup
1780 service. Email info@rsync.net for details.
1781
1782
SEE ALSO
1784 rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).
1785
1786
1787
1788Version 0.7.19 April 29, 2019 DUPLICITY(1)