DUPLICITY(1)                     User Manuals                    DUPLICITY(1)


NAME
       duplicity - Encrypted incremental backup to local or remote storage.

SYNOPSIS
       For detailed descriptions of each action see chapter ACTIONS.

       duplicity [backup|full|incremental] [options] source_directory
                 target_url

       duplicity verify [options] [--compare-data] [--path-to-restore
                 <relpath>] [--time time] source_url target_directory

       duplicity collection-status [options] [--file-changed <relpath>]
                 [--show-changes-in-set <index>] [--jsonstat] target_url

       duplicity list-current-files [options] [--time time] target_url

       duplicity [restore] [options] [--path-to-restore <relpath>] [--time
                 time] source_url target_directory

       duplicity remove-older-than <time> [options] [--force] target_url

       duplicity remove-all-but-n-full <count> [options] [--force] target_url

       duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
                 target_url

       duplicity cleanup [options] [--force] target_url

DESCRIPTION
       Duplicity incrementally backs up files and folders into tar-format
       volumes encrypted with GnuPG and places them on a remote (or local)
       storage backend.  See chapter URL FORMAT for a list of all supported
       backends and how to address them.  Because duplicity uses librsync,
       incremental backups are space efficient and only record the parts of
       files that have changed since the last backup.  Currently duplicity
       supports deleted files, full Unix permissions, uid/gid, directories,
       symbolic links, fifos, etc., but not hard links.

       If you are backing up the root directory /, remember to --exclude
       /proc, or else duplicity will probably crash on the weird stuff in
       there.

EXAMPLES
       Here is an example of a backup, using sftp to back up /home/me to
       some_dir on the other.host machine:

              duplicity /home/me sftp://uid@other.host/some_dir

       If the above is run repeatedly, the first will be a full backup, and
       subsequent ones will be incremental.  To force a full backup, use the
       full action:

              duplicity full /home/me sftp://uid@other.host/some_dir

       or enforce a periodic full backup via --full-if-older-than <time>,
       e.g. a full every month:

              duplicity --full-if-older-than 1M /home/me
              sftp://uid@other.host/some_dir

       Now suppose we accidentally delete /home/me and want to restore it
       the way it was at the time of the last backup:

              duplicity sftp://uid@other.host/some_dir /home/me

       Duplicity enters restore mode because the URL comes before the local
       directory.  If we wanted to restore just the file "Mail/article" in
       /home/me as it was three days ago into /home/me/restored_file:

              duplicity -t 3D --path-to-restore Mail/article
              sftp://uid@other.host/some_dir /home/me/restored_file

       The following action compares the latest backup with the current
       files:

              duplicity verify sftp://uid@other.host/some_dir /home/me

       Finally, duplicity recognizes several include/exclude options.  For
       instance, the following will back up the root directory, but exclude
       /mnt, /tmp, and /proc:

              duplicity --exclude /mnt --exclude /tmp --exclude /proc /
              file:///usr/local/backup

       Note that in this case the destination is the local directory
       /usr/local/backup.  The following will back up only the /home and
       /etc directories under root:

              duplicity --include /home --include /etc --exclude '**' /
              file:///usr/local/backup

       Duplicity can also access a repository via ftp.  If a user name is
       given, the environment variable FTP_PASSWORD is read to determine
       the password:

              FTP_PASSWORD=mypassword duplicity /local/dir
              ftp://user@other.host/some_dir

ACTIONS
       Duplicity uses actions, which can be given in long or in short form
       and fine-tuned with options.
       The actions 'backup' and 'restore' can be implied from the order in
       which the local path and remote URL are given.  Other actions need to
       be given explicitly.  For the rare case that the local path is itself
       a valid duplicity action name, you may append a '/' to the local path
       name so it can no longer be mistaken for an action.

       NOTE: The following explanations cover some but not all options that
       can be used in connection with each action.  Consult the OPTIONS
       section for more detailed descriptions.
       backup, bu <folder> <url>
              Perform a backup.  Duplicity automatically performs an
              incremental backup if old signatures can be found; otherwise a
              new backup chain is started.

       full, fb <folder> <url>
              Perform a full backup.  A new backup chain is started even if
              signatures are available for an incremental backup.

       incremental, ib <folder> <url>
              Perform an incremental backup.  Duplicity will abort if no old
              signatures can be found.
       verify, vb [--compare-data] [--time <time>] [--path-to-restore
       <rel_path>] <url> <local_path>
              Verify tests the integrity of the backup archives at the
              remote location by downloading each file and checking both
              that it can restore the archive and that the restored file
              matches the signature of that file stored in the backup, i.e.
              it compares the archived file with its hash value from
              archival time.  Verify does not actually restore and will not
              overwrite any local files.  Duplicity will exit with a
              non-zero error level if any files do not match the signature
              stored in the archive for that file.  On verbosity level 4 or
              higher, it will log a message for each file that differs from
              the stored signature.  Files must be downloaded to the local
              machine in order to compare them.  Verify does not compare the
              backed-up version of the file to the current local copy unless
              the --compare-data option is used (see below).
              The --path-to-restore option restricts verify to that file or
              folder.  The --time option allows one to select a backup to
              verify.  The --compare-data option enables data comparison
              (see below).

       collection-status, st [--file-changed <relpath>]
       [--show-changes-in-set <index>] <url>
              Summarize the status of the backup repository by printing the
              chains and sets found, and the number of volumes in each.
              The --file-changed option summarizes the changes to the given
              file (in the most recent backup chain).  The
              --show-changes-in-set option summarizes all the file changes
              in the index:th backup set (where index 0 means the latest
              set, 1 means the next to latest, etc.).  --jsonstat prints the
              changes in JSON format along with statistics from the jsonstat
              files, if the backups were created with --jsonstat.  If
              <index> is set to -1, statistics for the whole backup chain
              are printed.
       list-current-files, ls [--time <time>] <url>
              Lists the files contained in the most current backup or in the
              backup at the given time.  The information will be extracted
              from the signature files, not the archive data itself.  Thus
              the whole archive does not have to be downloaded, but, on the
              other hand, if the archive has been deleted or corrupted, this
              action will not detect that.

       restore, rb [--path-to-restore <relpath>] [--time <time>] <url>
       <target_folder>
              You can restore the full monty or selected folders/files from
              a specific time.  Use the relative path as it is printed by
              list-current-files.  Usually not needed, as duplicity enters
              restore mode when it detects that the URL comes before the
              local folder.
       remove-older-than, ro <time> [--force] <url>
              Delete all backup sets older than the given time.  Old backup
              sets will not be deleted if backup sets newer than time depend
              on them.  See the TIME FORMATS section for more information.
              Note that this action cannot be combined with backup or other
              actions, such as cleanup.  Note also that --force will be
              needed to delete the files instead of just listing them.

       remove-all-but-n-full, ra <count> [--force] <url>
              Delete all backup sets that are older than the count:th last
              full backup (in other words, keep the last count full backups
              and associated incremental sets).  count must be larger than
              zero.  A value of 1 means that only the single most recent
              backup chain will be kept.  Note that --force will be needed
              to delete the files instead of just listing them.

       remove-all-inc-of-but-n-full, ri <count> [--force] <url>
              Delete incremental sets of all backup sets that are older than
              the count:th last full backup (in other words, keep only old
              full backups and not their increments).  count must be larger
              than zero.  A value of 1 means that only the single most
              recent backup chain will be kept intact.  Note that --force
              will be needed to delete the files instead of just listing
              them.
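
       The retention rule behind remove-all-but-n-full can be sketched in a
       few lines of Python.  This is an illustration only, not duplicity's
       actual implementation; backup chains are represented here simply by
       the timestamps of the full backups that start them:

```python
def sets_to_delete(full_backup_times, count):
    """Sketch of remove-all-but-n-full: keep the chains started by the
    `count` most recent full backups, report everything older as
    deletable.  Illustration only, not duplicity's real code."""
    if count < 1:
        raise ValueError("count must be larger than zero")
    kept = sorted(full_backup_times)[-count:]   # the count newest fulls
    return [t for t in full_backup_times if t not in kept]
```

       With three fulls and a count of 1, the two older chains are selected
       for deletion; with a count at least as large as the number of fulls,
       nothing is.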

       cleanup, cl [--force] <url>
              Delete the extraneous duplicity files on the given backend.
              Non-duplicity files, or files in complete data sets, will not
              be deleted.  This should only be necessary after a duplicity
              session fails or is aborted prematurely.  Note that --force
              will be needed to delete the files instead of just listing
              them.

OPTIONS
       --allow-source-mismatch
              Do not abort on attempts to use the same archive dir or remote
              backend to back up different directories.  duplicity will tell
              you if you need this switch.

       --archive-dir path
              The archive directory.

              NOTE: This option changed in 0.6.0.  The archive directory is
              now necessary in order to manage persistence for current and
              future enhancements.  As such, this option is now used only to
              change the location of the archive directory.  The archive
              directory should not be deleted, or duplicity will have to
              recreate it from the remote repository (which may require
              decrypting the backup contents).

              When backing up or restoring, this option specifies that the
              local archive directory is to be created in path.  If the
              archive directory is not specified, the default will be to
              create the archive directory in ~/.cache/duplicity/.

              The archive directory can be shared between backups to
              multiple targets, because a subdirectory of the archive dir is
              used for individual backups (see --name).

              The combination of archive directory and backup name must be
              unique in order to separate the data of different backups.

              The interaction between the --archive-dir and the --name
              options allows for four possible combinations for the location
              of the archive dir:

              1. neither specified (default)
                     ~/.cache/duplicity/hash-of-url

              2. --archive-dir=/arch, no --name
                     /arch/hash-of-url

              3. no --archive-dir, --name=foo
                     ~/.cache/duplicity/foo

              4. --archive-dir=/arch, --name=foo
                     /arch/foo
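
       The four combinations above can be summarized in a short Python
       sketch.  This is an illustration of the resolution rule only, not
       duplicity's actual code; in particular, the choice of MD5 for the
       "hash-of-url" is an assumption made here for demonstration:

```python
import hashlib
import os.path

def archive_subdir(url, archive_dir=None, name=None):
    """Resolve the local archive directory for a backup, mirroring the
    four --archive-dir/--name combinations in the man page.
    NOTE: the exact hash function is an assumption for illustration."""
    base = archive_dir if archive_dir else os.path.expanduser("~/.cache/duplicity")
    sub = name if name else hashlib.md5(url.encode()).hexdigest()  # "hash-of-url"
    return os.path.join(base, sub)
```

       For example, archive_subdir(url, archive_dir="/arch", name="foo")
       yields /arch/foo (combination 4), while leaving both unset yields a
       hash-named subdirectory under ~/.cache/duplicity/ (combination 1).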

       --asynchronous-upload
              (EXPERIMENTAL) Perform file uploads asynchronously in the
              background, with respect to volume creation.  This means that
              duplicity can upload a volume while, at the same time,
              preparing the next volume for upload.  The intended end result
              is a faster backup, because the local CPU and your bandwidth
              can be more consistently utilized.  Use of this option implies
              an additional need for disk space in the temporary storage
              location; rather than needing to store only one volume at a
              time, enough storage space is required to store two volumes.
       --azure-blob-tier
              Standard storage tier used for backup files
              (Hot|Cool|Archive).

       --azure-max-single-put-size
              Specify the largest supported upload size for which the Azure
              library makes only one put call.  If the content size is known
              and below this value, the Azure library will perform only one
              put request to upload one block.  The number is expected to be
              in bytes.

       --azure-max-block-size
              Specify the block size, in bytes, used by the Azure library to
              upload blobs when a blob is split into multiple blocks.  The
              maximum block size the service supports is 104857600 (100 MiB)
              and the default is 4194304 (4 MiB).

       --azure-max-connections
              Specify the maximum number of connections used to transfer one
              blob to Azure when the blob size exceeds 64 MB.  The default
              value is 2.

       --b2-hide-files
              Causes Duplicity to hide files in B2 instead of deleting them.
              Useful in combination with B2's lifecycle rules.
       --backend-retry-delay number
              Specifies the number of seconds that duplicity waits after an
              error has occurred before attempting to repeat the operation.

       --cf-backend backend
              Allows the explicit selection of a cloudfiles backend.
              Defaults to pyrax.  Alternatively you might choose cloudfiles.

       --config-dir path
              Allows selection of duplicity's configuration dir.  Defaults
              to ~/.config/duplicity.

       --copy-blocksize kilos
              Allows selection of blocksize in kilobytes to use in copying.
              Increasing this may speed copying of large files.  Defaults to
              128.

       --compare-data
              Enable data comparison of regular files on action verify.
              This conducts a verify as described above to check the
              integrity of the backup archives, but additionally compares
              restored files to those in target_directory.  Duplicity will
              not replace any files in target_directory.  Duplicity will
              exit with a non-zero error level if the files do not correctly
              verify or if any files from the archive differ from those in
              target_directory.  On verbosity level 4 or higher, it will log
              a message for each file that differs from its equivalent in
              target_directory.
       --copy-links
              Resolve symlinks during backup.  Enabling this will resolve
              and back up the symlink's file/folder data instead of the
              symlink itself, potentially increasing the size of the backup.

       --dry-run
              Calculate what would be done, but do not perform any backend
              actions.
       --encrypt-key key-id
              When backing up, encrypt to the given public key, instead of
              using symmetric (traditional) encryption.  Can be specified
              multiple times.  The key-id can be given in any of the formats
              supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
              ID" for details.

       --encrypt-secret-keyring filename
              This option can only be used with --encrypt-key, and changes
              the path to the secret keyring for the encrypt key to
              filename.  This keyring is not used when creating a backup.
              If not specified, the default secret keyring is used, which is
              usually located at .gnupg/secring.gpg

       --encrypt-sign-key key-id
              Convenience parameter.  Same as --encrypt-key key-id
              --sign-key key-id.

       --exclude shell_pattern
              Exclude the file or files matched by shell_pattern.  If a
              directory is matched, then files under that directory will
              also be matched.  See the FILE SELECTION section for more
              information.

       --exclude-device-files
              Exclude all device files.  This can be useful for
              security/permissions reasons or if duplicity is not handling
              device files correctly.

       --exclude-filelist filename
              Excludes the files listed in filename, with each line of the
              filelist interpreted according to the same rules as --include
              and --exclude.  See the FILE SELECTION section for more
              information.

       --exclude-if-present filename
              Exclude directories if filename is present.  Allows the user
              to specify folders that they do not wish to back up by adding
              a specified file (e.g. ".nobackup") instead of maintaining a
              comprehensive exclude/include list.

       --exclude-older-than time
              Exclude any files whose modification date is earlier than the
              specified time.  This can be used to produce a partial backup
              that contains only recently changed files.  See the TIME
              FORMATS section for more information.

       --exclude-other-filesystems
              Exclude files on file systems (identified by device number)
              other than the file system the root of the source directory
              is on.

       --exclude-regexp regexp
              Exclude files matching the given regexp.  Unlike the --exclude
              option, this option does not match files in a directory it
              matches.  See the FILE SELECTION section for more information.
       --files-from filename
              Read a list of files to back up from filename rather than
              searching the entire backup source directory.  Operation is
              otherwise normal, just on the specified subset of the backup
              source directory.

              Files must be specified one per line and relative to the
              backup source directory.  Any absolute paths will raise an
              error.  All characters per line are significant and treated as
              part of the path, including leading and trailing whitespace.
              Lines are separated by newlines or nulls, depending on whether
              the --null-separator switch was given.

              It is not necessary to include the parent directories of
              listed files; their inclusion is implied.  However, the
              content of any explicitly listed directories is not implied.
              All required files must be listed when this option is used.
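
       The parent-directory rule above can be sketched as follows.  This is
       an illustration of the described semantics, not duplicity's actual
       selection code:

```python
def implied_parents(listed_paths):
    """Given paths listed in a --files-from file (relative, /-separated),
    return the parent directories whose inclusion is implied.
    Sketch of the rule described in the man page, not real duplicity code."""
    implied = set()
    for path in listed_paths:
        parts = path.split("/")
        for i in range(1, len(parts)):          # every proper ancestor
            implied.add("/".join(parts[:i]))
    return sorted(implied)
```

       Listing only home/me/Mail/article implies the directories home,
       home/me and home/me/Mail, but implies nothing about their other
       contents.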

       --file-prefix prefix
       --file-prefix-manifest prefix
       --file-prefix-archive prefix
       --file-prefix-signature prefix
              Adds a prefix to either all files or only manifest, archive,
              or signature files.  The same set of prefixes must be passed
              in on backup and restore.
              If both global and type-specific prefixes are set, the global
              prefix will go before the type-specific prefixes.

              See also A NOTE ON FILENAME PREFIXES
       --path-to-restore path
              This option may be given in restore mode, causing only path to
              be restored instead of the entire contents of the backup
              archive.  path should be given relative to the root of the
              directory backed up.
       --filter-globbing
       --filter-ignorecase
       --filter-literal
       --filter-regexp
       --filter-strictcase
              Change the interpretation of patterns passed to the file
              selection condition option arguments --exclude and --include
              (and variations thereof, including file lists).  These options
              can appear multiple times to switch between shell globbing
              (default), literal strings, and regular expressions, case
              sensitive (default) or not.  The specified interpretation
              applies for all subsequent selection conditions up until the
              next --filter option.

              See the FILE SELECTION section for more information.
       --full-if-older-than time
              Perform a full backup if an incremental backup is requested,
              but the latest full backup in the collection is older than the
              given time.  See the TIME FORMATS section for more
              information.

       --force
              Proceed even if data loss might result.  Duplicity will let
              the user know when this option is required.

       --ftp-passive
              Use passive (PASV) data connections.  The default is to use
              passive, but to fall back to regular if the passive connection
              fails or times out.

       --ftp-regular
              Use regular (PORT) data connections.
       --gio  Use the GIO backend and interpret any URLs as GIO would.

       --gpg-binary file_path
              Allows you to force duplicity to use file_path as the gpg
              command line binary.  Can be an absolute or relative file path
              or a file name.  Default value is 'gpg'.  The binary will be
              located via the PATH environment variable.

       --gpg-options options
              Allows you to pass options to gpg encryption.  The options
              list should be of the form "--opt1 --opt2=parm" where the
              string is quoted and the only spaces allowed are between
              options.

       --hidden-encrypt-key key-id
              Same as --encrypt-key, but it hides the user's key id from the
              encrypted file.  It uses gpg's --hidden-recipient command to
              obfuscate the owner of the backup.  On restore, gpg will
              automatically try all available secret keys in order to
              decrypt the backup.  See gpg(1) for more details.
       --ignore-errors
              Try to ignore certain errors if they happen.  This option is
              only intended to allow the restoration of a backup in the face
              of certain problems that would otherwise cause the backup to
              fail.  It is not ever recommended to use this option unless
              you have a situation where you are trying to restore from
              backup and it is failing because of an issue which you want
              duplicity to ignore.  Even then, depending on the issue, this
              option may not have an effect.

              Please note that while ignored errors will be logged, there
              will be no summary at the end of the operation to tell you
              what was ignored, if anything.  If this is used for emergency
              restoration of data, it is recommended that you run the backup
              in such a way that you can revisit the backup log (look for
              lines containing the string IGNORED_ERROR).

              If you ever have to use this option for reasons that are not
              understood, or understood but not your own responsibility,
              please contact the duplicity maintainers.  The need to use
              this option under production circumstances would normally be
              considered a bug.

       --imap-full-address email_address
              The full email address of the user name when logging into an
              imap server.  If not supplied, just the user name part of the
              email address is used.

       --imap-mailbox option
              Allows you to specify a different mailbox.  The default is
              "INBOX".  Other languages may require a different mailbox than
              the default.
       --idr-fakeroot
              idrived uses the concept of a "fakeroot" directory, defined
              via the --idr-fakeroot=... switch.  This can be an existing
              directory, or the directory is created at runtime at the root
              of the (host) file system (caveat: you have to have write
              access to the root!).  Directories created at runtime are
              auto-removed on exit!
              So, in the above scheme, we could do:
                     duplicity --idr-fakeroot=nicepath idrived://DUPLICITY
              Our files end up at
                     <MYBUCKET>/DUPLICITY/nicepath

       --include shell_pattern
              Similar to --exclude but include matched files instead.
              Unlike --exclude, this option will also match parent
              directories of matched files (although not necessarily their
              contents).  See the FILE SELECTION section for more
              information.

       --include-filelist filename
              Like --exclude-filelist, but include the listed files instead.
              See the FILE SELECTION section for more information.

       --include-regexp regexp
              Include files matching the regular expression regexp.  Only
              files explicitly matched by regexp will be included by this
              option.  See the FILE SELECTION section for more information.
       --jsonstat
              Record statistics similar to the default stats printed at the
              end of a backup job; additionally this includes some metadata
              about the backup chain, e.g. when the full backup was created
              and how many incremental backups were made before.  The output
              format is JSON.  It is written to stdout at notice level (like
              the classic stats), and the statistics are kept in a separate
              file next to the manifest but with "jsonstat" as extension.
              collection-status --show-changes-in-set <index> --jsonstat
              adds data collected in the backup job and switches the output
              format to JSON.  If <index> is set to -1, statistics for the
              whole backup chain are printed.

       --log-fd number
              Write specially-formatted versions of output messages to the
              specified file descriptor.  The format used is designed to be
              easily consumable by other programs.

       --log-file filename
              Write specially-formatted versions of output messages to the
              specified file.  The format used is designed to be easily
              consumable by other programs.

       --log-timestamp
              Write the log with timestamp and log level before the message,
              similar to syslog.

       --max-blocksize number
              Determines the size of the blocks examined for changes during
              the diff process.  For files < 1MB the blocksize is a constant
              of 512.  For files over 1MB the size is given by:

                     file_blocksize = int((file_len / (2000 * 512)) * 512)
                     return min(file_blocksize, config.max_blocksize)

              where config.max_blocksize defaults to 2048.

              If you specify a larger max_blocksize, your difftar files will
              be larger, but your sigtar files will be smaller.  If you
              specify a smaller max_blocksize, the reverse occurs.  The
              --max-blocksize option should be in multiples of 512.
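
       As a self-contained sketch, the formula above translates directly to
       Python (illustration of the documented formula, not duplicity's
       source):

```python
def diff_blocksize(file_len, max_blocksize=2048):
    """Block size examined during the diff process, per the formula in
    the man page: constant 512 below 1MB, scaled and capped above it."""
    if file_len < 1024 * 1024:                  # files < 1MB
        return 512
    file_blocksize = int((file_len / (2000 * 512)) * 512)
    return min(file_blocksize, max_blocksize)   # capped at max_blocksize
```

       A 2MB file thus gets a block size of 1048, while a 10MB file is
       already capped at the default max_blocksize of 2048.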
       --mf-purge
              Option for mediafire to purge files on delete instead of
              sending them to trash.

       --mp-segment-size megs
              Swift backend segment size in megabytes.

       --name symbolicname
              Set the symbolic name of the backup being operated on.  The
              intent is to use a separate name for each logically distinct
              backup.  For example, someone may use "home_daily_s3" for the
              daily backup of a home directory to Amazon S3.  The structure
              of the name is up to the user; it is only important that the
              names be distinct.  The symbolic name is currently only used
              to affect the expansion of --archive-dir, but may be used for
              additional features in the future.  Users running more than
              one distinct backup are encouraged to use this option.

              If not specified, the default value is a hash of the backend
              URL.
       --no-check-remote
              Turn off validation of the remote manifest.  Checking is the
              default.  No checking will allow you to back up without the
              private key, but will mean that the remote manifest may exist
              and be corrupted, leading to the possibility that the backup
              might not be recoverable.
       --no-compression
              Do not use GZip to compress files on the remote system.

       --no-encryption
              Do not use GnuPG to encrypt files on the remote system.

       --no-print-statistics
              By default duplicity will print statistics about the current
              session after a successful backup.  This switch disables that
              behavior.

       --no-files-changed
              By default duplicity will collect file names and change
              actions (add, del, chg) in memory during backup.  This can be
              quite expensive in memory use, especially with millions of
              small files.  This flag turns off that collection.  This means
              that the --file-changed option for collection-status will
              return nothing.
       --null-separator
              Use nulls (\0) instead of newlines (\n) as line separators,
              which may help when dealing with filenames containing
              newlines.  This affects the expected format of the files
              specified by the --{include|exclude}-filelist switches and the
              --files-from option, as well as the format of the directory
              statistics file.
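
       The effect on filelist parsing can be sketched as follows (an
       illustration of the separator rule only, not duplicity's parser):

```python
def parse_filelist(data, null_separator=False):
    """Split filelist content into entries, per --null-separator.
    With NUL separators, filenames may legally contain newlines.
    Sketch only, not duplicity's actual filelist parser."""
    sep = "\0" if null_separator else "\n"
    return [entry for entry in data.split(sep) if entry]
```

       Note how a filename containing a newline survives intact only in the
       NUL-separated form.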
       --numeric-owner
              On restore, always use the numeric uid/gid from the archive
              and not the archived user/group names, which is the default
              behaviour.  Recommended for restoring from live CDs which
              might have users with identical names but different uids/gids.

       --no-restore-ownership
              Ignores the uid/gid from the archive and keeps the current
              user's one.  Recommended for restoring data to a mounted
              filesystem which does not support Unix ownership, or when root
              privileges are not available.

       --num-retries number
              Number of retries to make on errors before giving up.
       --par2-options options
              Verbatim options to pass to par2.

       --par2-redundancy percent
              Adjust the level of redundancy in percent for Par2 recovery
              files (default 10%).

       --par2-volumes number
              Number of Par2 volumes to create (default 1).

       --progress
              When selected, duplicity will output the current upload
              progress and estimated upload time.  To annotate changes, it
              will perform a first dry run before a full or incremental
              backup, and then run the real operation, estimating the real
              upload progress.

       --progress-rate number
              Sets the update rate at which duplicity will output the upload
              progress messages (requires the --progress option).  Default
              is to print the status every 3 seconds.
       --rename <original path> <new path>
              Treats the path orig in the backup as if it were the path new.
              Can be passed multiple times.  An example:

                     duplicity restore --rename Documents/metal Music/metal
                     sftp://uid@other.host/some_dir /home/me

       --rsync-options options
              Allows you to pass options to the rsync backend.  The options
              list should be of the form "opt1=parm1 opt2=parm2" where the
              option string is quoted and the only spaces allowed are
              between options.  The option string will be passed verbatim to
              rsync, after any internally generated option designating the
              remote port to use.  Here is a possibly useful example:

                     duplicity --rsync-options="--partial-dir=.rsync-partial"
                     /home/me rsync://uid@other.host/some_dir
       --s3-endpoint-url url
              Specifies the endpoint URL of the S3 storage.

       --s3-multipart-chunk-size
              Chunk size (in MB, default is 20MB) used for S3 multipart
              uploads.  Adjust this to maximize bandwidth usage.  For
              example, a chunk size of 10MB and a volsize of 100MB would
              result in 10 chunks per volume upload.

              NOTE: For best performance, your --volsize should be an even
              multiple of this value.

              See also A NOTE ON AMAZON S3 below.
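
       The chunk arithmetic in the example above is simply the volume size
       divided by the chunk size, rounded up (a sketch of the relationship
       described here, not duplicity's upload code):

```python
import math

def chunks_per_volume(volsize_mb, chunk_mb=20):
    """Number of multipart chunks needed to upload one volume, per the
    example in the man page (100MB volume / 10MB chunks -> 10 chunks).
    Illustration only; the 20MB default mirrors the documented default."""
    return math.ceil(volsize_mb / chunk_mb)
```

       When the chunk size does not divide the volume size evenly, the last
       chunk is simply smaller, which is why an even multiple performs best.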

       --s3-multipart-max-procs
              Maximum number of concurrent uploads when performing a
              multipart upload.  The default is 4.  You can adjust this
              number to maximize bandwidth and CPU utilization.

              NOTE: Too many concurrent uploads may have diminishing
              returns.

              See also A NOTE ON AMAZON S3 below.

       --s3-region-name
              Specifies the region of the S3 storage.  Usually mandatory if
              the bucket is created in a specific region.

       --s3-unencrypted-connection
              Disable SSL for connections to S3.  This may be much faster,
              at some cost to confidentiality.

              With this option set, anyone between your computer and S3 can
              observe the traffic and will be able to tell: that you are
              using Duplicity, the name of the bucket, your AWS Access Key
              ID, the increment dates and the amount of data in each
              increment.

              This option affects only the connection, not the GPG
              encryption of the backup increment files.  Unless that is
              disabled, an observer will not be able to see the file names
              or contents.

              See also A NOTE ON AMAZON S3 below.
783 --s3-use-deep-archive
784 Store volumes using Glacier Deep Archive S3 when uploading to
785 Amazon S3. This storage class has a lower cost of storage but a
786 higher per-request cost along with delays of up to 48 hours from
787 the time of retrieval request. This storage cost is calculated
788 against a 180-day storage minimum. According to Amazon this
789 storage is ideal for data archiving and long-term backup
790 offering 99.999999999% durability. To restore a backup you will
791 have to manually migrate all data stored on AWS Glacier Deep
792 Archive back to Standard S3 and wait for AWS to complete the
793 migration.
794
795 NOTE: Duplicity will store the manifest.gpg and sigtar.gpg files
796 from full and incremental backups on AWS S3 standard storage to
797 allow quick retrieval for later incremental backups, all other
798 data is stored in S3 Glacier Deep Archive.
799
800 --s3-use-glacier
801 Store volumes using Glacier Flexible Storage when uploading to
802 Amazon S3. This storage class has a lower cost of storage but a
803 higher per-request cost along with delays of up to 12 hours from
804 the time of retrieval request. This storage cost is calculated
805 against a 90-day storage minimum. According to Amazon this
806 storage is ideal for data archiving and long-term backup
807 offering 99.999999999% durability. To restore a backup you will
808 have to manually migrate all data stored on AWS Glacier back to
809 Standard S3 and wait for AWS to complete the migration.
810
811 NOTE: Duplicity will store the manifest.gpg and sigtar.gpg files
812 from full and incremental backups on AWS S3 standard storage to
813 allow quick retrieval for later incremental backups, all other
814 data is stored in S3 Glacier.
815
816 --s3-use-glacier-ir
817 Store volumes using Glacier Instant Retrieval when uploading to
818 Amazon S3. This storage class is similar to Glacier Flexible
819 Storage but offers instant retrieval at standard speeds.
820
821 NOTE: Duplicity will store the manifest.gpg and sigtar.gpg files
822 from full and incremental backups on AWS S3 standard storage to
823 allow quick retrieval for later incremental backups, all other
824 data is stored in S3 Glacier.

       --s3-use-ia
              Store volumes using Standard - Infrequent Access when
              uploading to Amazon S3. This storage class has a lower storage
              cost but a higher per-request cost, and the storage cost is
              calculated against a 30-day storage minimum. According to
              Amazon, this storage is ideal for long-term file storage,
              backups, and disaster recovery.

       --s3-use-onezone-ia
              Store volumes using One Zone - Infrequent Access when
              uploading to Amazon S3. This storage is similar to Standard -
              Infrequent Access, but only stores object data in one
              Availability Zone.

       --s3-use-rrs
              Store volumes using Reduced Redundancy Storage when uploading
              to Amazon S3. This will lower the cost of storage but also
              lower the durability of stored volumes to 99.99% instead of
              the 99.999999999% durability offered by Standard Storage on
              S3.

       --s3-use-server-side-kms-encryption
       --s3-kms-key-id key_id
       --s3-kms-grant grant
              Enable server-side encryption using the AWS Key Management
              Service.

       --skip-if-no-change
              By default an empty incremental backup is created if no files
              have changed. Setting this option will skip creating a backup
              if no data has changed. Nothing will be sent to the target,
              and no information will be cached.

       --scp-command command
              (only ssh pexpect backend with --use-scp enabled)
              The command will be used instead of "scp" to send or receive
              files. To list and delete existing files, the sftp command is
              used.
              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.

       --sftp-command command
              (only ssh pexpect backend)
              The command will be used instead of "sftp".
              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.

       --sign-key key-id
              This option can be used when backing up, restoring or
              verifying. When backing up, all backup files will be signed
              with the given key-id. When restoring, duplicity will signal
              an error if any remote file is not signed with the given
              key-id. The key-id can be given in any of the formats
              supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A
              USER ID" for details. Should be specified only once because
              currently only one signing key is supported; the last entry
              overrides all earlier entries.
              See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING

       --ssh-askpass
              Tells the ssh backend to prompt the user for the remote
              system password, if it was not defined in the target URL and
              no FTP_PASSWORD env var is set. This password is also used
              for passphrase-protected ssh keys.

       --ssh-options options
              Allows you to pass options to the ssh backend. Can be
              specified multiple times or as a space separated options
              list. The options list should be of the form
              "-oOpt1='parm1' -oOpt2='parm2'" where the option string is
              quoted and the only spaces allowed are between options. The
              option string will be passed verbatim to both scp and sftp,
              whose command line syntax differs slightly; the options
              should therefore be given in the long option format described
              in ssh_config(5).

              example of a list:

              duplicity --ssh-options="-oProtocol=2
              -oIdentityFile='/my/backup/id'" /home/me
              scp://user@host/some_dir

              example with multiple parameters:

              duplicity --ssh-options="-oProtocol=2" --ssh-
              options="-oIdentityFile='/my/backup/id'" /home/me
              scp://user@host/some_dir

              NOTE: The ssh paramiko backend currently supports only the
              -i, -oIdentityFile, -oUserKnownHostsFile and
              -oGlobalKnownHostsFile settings. If needed, provide more
              host-specific options via the ssh_config file.

       --ssl-cacert-file file
              (only webdav & lftp backend) Provide a cacert file for ssl
              certificate verification.

              See also A NOTE ON SSL CERTIFICATE VERIFICATION.

       --ssl-cacert-path path/to/certs/
              (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
              recent lftp) Provide a path to a folder containing cacert
              files for ssl certificate verification.

              See also A NOTE ON SSL CERTIFICATE VERIFICATION.

       --ssl-no-check-certificate
              (only webdav & lftp backend) Disable ssl certificate
              verification.

              See also A NOTE ON SSL CERTIFICATE VERIFICATION.

       --swift-storage-policy
              Use this storage policy when operating on Swift containers.

              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.

       --metadata-sync-mode mode
              This option defaults to 'partial', but you can set it to
              'full'.

              Use 'partial' to avoid syncing metadata for backup chains
              that you are not going to use. This saves time when restoring
              for the first time, and lets you restore an old backup that
              was encrypted with a different passphrase by supplying only
              the target passphrase.

              Use 'full' to sync metadata for all backup chains on the
              remote.

       --tempdir directory
              Use this existing directory for duplicity temporary files
              instead of the system default, which is usually the /tmp
              directory. This option supersedes any environment variable.

              See also ENVIRONMENT VARIABLES.

       -t time, --time time, --restore-time time
              Specify the time from which to restore or list files.

              See section TIME FORMATS for details.

       --time-separator char
              Use char as the time separator in filenames instead of a
              colon (":").

              NOTE: This option only applies to recovery and status style
              actions. We no longer create or write filenames with time
              separators, but will read older backups that may need this
              option.

       --timeout seconds
              Use seconds as the socket timeout value if duplicity begins
              to timeout during network operations. The default is 30
              seconds.

       --use-agent
              If this option is specified, then --use-agent is passed to
              the GnuPG encryption process and it will try to connect to
              gpg-agent before it asks for a passphrase for --encrypt-key
              or --sign-key if needed.

              NOTE: Contrary to previous versions of duplicity, this option
              will also be honored by GnuPG 2 and newer versions. If GnuPG
              2 is in use, duplicity passes the option
              --pinentry-mode=loopback to the gpg process unless
              --use-agent is specified on the duplicity command line. This
              has the effect that GnuPG 2 uses the agent only if
              --use-agent is given, just like GnuPG 1.

       --verbosity level, -vlevel
              Specify output verbosity level (log level). Named levels and
              corresponding values are 0 Error, 2 Warning, 4 Notice
              (default), 8 Info, 9 Debug (noisiest).
              level may also be
                     a character: e, w, n, i, d
                     a word: error, warning, notice, info, debug

              The options -v4, -vn and -vnotice are functionally
              equivalent, as are the mixed/upper-case versions -vN,
              -vNotice and -vNOTICE.

       --version
              Print duplicity's version and quit.

       --volsize number
              Change the volume size to number MB. Default is 200MB.

       --webdav-headers csv formatted key,value pairs
              The input format is a comma separated list of key,value
              pairs. Standard CSV encoding may be used.

              For example, to set a Cookie use 'Cookie,name=value', or
              '"Cookie","name=value"'.

              You can set multiple headers, e.g.
              '"Cookie","name=value","Authorization","xxx"'.

ENVIRONMENT VARIABLES
       TMPDIR, TEMP, TMP
              In decreasing order of importance, specifies the directory to
              use for temporary files (inherited from Python's tempfile
              module). The option --tempdir supersedes any of these.

       FTP_PASSWORD
              Supported by most backends which are password capable. More
              secure than setting it in the backend URL (which might be
              readable in the operating system's process listing to other
              users on the same machine).

       PASSPHRASE
              This passphrase is passed to GnuPG. If this is not set, the
              user will be prompted for the passphrase. GPG uses the AES
              encryption method for passphrase encryption.

       SIGN_PASSPHRASE
              The passphrase to be used for --sign-key. If omitted, and the
              sign key is also one of the keys to encrypt against,
              PASSPHRASE will be reused instead. Otherwise, if a passphrase
              is needed but not set, the user will be prompted for it. GPG
              uses the AES encryption method for passphrase encryption.

       Other environment variables may be used to configure specific
       backends. See the notes for the particular backend.
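       Putting these variables together, an unattended backup can run
       without prompting. This is a hypothetical sketch: the passphrases,
       key IDs, paths and host below are placeholders, not values taken
       from this manual.

       ```shell
       # Supply all passphrases via the environment so duplicity never prompts.
       # All values here are placeholders for illustration only.
       export PASSPHRASE='encryption-passphrase'
       export SIGN_PASSPHRASE='signing-passphrase'
       export FTP_PASSWORD='remote-password'
       # Hypothetical invocation (key IDs, source path and URL are examples):
       # duplicity --encrypt-key DEADBEEF --sign-key CAFEBABE \
       #     /home/me sftp://uid@other.host/some_dir
       ```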

URL FORMAT
       Duplicity uses the URL format (as standard as possible) to define
       data locations. The major difference is that the whole host section
       is optional for some backends.
       NOTE: If the path starts with an extra '/' it usually denotes an
       absolute path on the backend.

       The generic format for a URL is:

              scheme://[[user[:password]@]host[:port]/][/]path

       or

              scheme://[/]path

       It is not recommended to expose the password on the command line,
       since it could be revealed to anyone with permission to do process
       listings; it is however permitted. Consider setting the environment
       variable FTP_PASSWORD instead, which is used by most, if not all,
       backends, regardless of its name.

       In protocols that support it, the path may be preceded by a single
       slash, '/path', to represent a relative path to the target home
       directory, or preceded by a double slash, '//path', to represent an
       absolute filesystem path.

       NOTE: Scheme (protocol) access may be provided by more than one
       backend. In case the default backend is buggy or simply not working
       in a specific case it might be worth trying an alternative
       implementation. Alternative backends can be selected by prefixing
       the scheme with the name of the alternative backend, e.g.
       ncftp+ftp://, and are mentioned below the scheme's syntax summary.

       Formats of each of the URL schemes follow:

       Amazon Drive Backend
              ad://some_dir

              See also A NOTE ON AMAZON DRIVE

       Azure
              azure://container-name

              See also A NOTE ON AZURE ACCESS

       B2
              b2://account_id[:application_key]@bucket_name/[folder/]

       Box
              box:///some_dir[?config=path_to_config]

              See also A NOTE ON BOX ACCESS

       Cloud Files (Rackspace)
              cf+http://container_name

              See also A NOTE ON CLOUD FILES ACCESS

       Dropbox
              dpbx:///some_dir

              Make sure to read A NOTE ON DROPBOX ACCESS first!

       File (local file system)
              file://[relative|/absolute]/local/path

       FISH (Files transferred over Shell protocol) over ssh
              fish://user[:password]@other.host[:port]/[relative|/absolute]_path

       FTP
              ftp[s]://user[:password]@other.host[:port]/some_dir

              NOTE: use lftp+, ncftp+ prefixes to enforce a specific
              backend, default is lftp+ftp://...

       Google Cloud Storage (GCS via Interoperable Access)
              s3://bucket[/path]

              See A NOTE ON GOOGLE CLOUD STORAGE about the needed endpoint
              option and env vars for authentication.

       Google Docs
              gdocs://user[:password]@other.host/some_dir

              NOTE: use pydrive+, gdata+ prefixes to enforce a specific
              backend, default is pydrive+gdocs://...

       Google Drive
              gdrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir

              See also A NOTE ON GDRIVE BACKEND below.

       HSI
              hsi://user[:password]@other.host/some_dir

       hubiC
              cf+hubic://container_name

              See also A NOTE ON HUBIC

       IMAP email storage
              imap[s]://user[:password]@host.com[/from_address_prefix]

              See also A NOTE ON IMAP

       MediaFire
              mf://user[:password]@mediafire.com/some_dir

              See also A NOTE ON MEDIAFIRE BACKEND below.

       MEGA.nz cloud storage (only works for accounts created prior to
       November 2018, uses "megatools")
              mega://user[:password]@mega.nz/some_dir

              NOTE: if not given in the URL, relies on the password being
              stored within $HOME/.megarc (as used by the "megatools"
              utilities)

       MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd"
       tools)
              megav2://user[:password]@mega.nz/some_dir
              megav3://user[:password]@mega.nz/some_dir[?no_logout=1] (For
              latest MEGAcmd)

              NOTE: although "MEGAcmd" no longer uses a configuration
              file, for convenience this backend searches for the user
              password in the $HOME/.megav2rc file (same syntax as the old
              $HOME/.megarc):
              [Login]
              Username = MEGA_USERNAME
              Password = MEGA_PASSWORD

       multi
              multi:///path/to/config.json

              See also A NOTE ON MULTI BACKEND below.

       OneDrive Backend
              onedrive://some_dir

              See also A NOTE ON ONEDRIVE BACKEND

       Par2 Wrapper Backend
              par2+scheme://[user[:password]@]host[:port]/[/]path

              See also A NOTE ON PAR2 WRAPPER BACKEND

       Public Cloud Archive (OVH)
              pca://container_name[/prefix]

              See also A NOTE ON PCA ACCESS

       pydrive
              pydrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir

              See also A NOTE ON PYDRIVE BACKEND below.

       Rclone Backend
              rclone://remote:/some_dir

              See also A NOTE ON RCLONE BACKEND

       Rsync via daemon
              rsync://user[:password]@host.com[:port]::[/]module/some_dir

       Rsync over ssh (only key auth)
              rsync://user@host.com[:port]/[relative|/absolute]_path

       S3 storage (Amazon)
              s3:///bucket_name[/path]

              See also A NOTE ON AMAZON S3 below.

       SCP/SFTP Secure Copy Protocol/SSH File Transfer Protocol
              scp://.. or
              sftp://user[:password]@other.host[:port]/[relative|/absolute]_path

              defaults are paramiko+scp:// and paramiko+sftp://
              alternatively try pexpect+scp://, pexpect+sftp://,
              lftp+sftp://
              See also --ssh-askpass, --ssh-options and A NOTE ON SSH
              BACKENDS.

       slate
              slate://[slate-id]

              See also A NOTE ON SLATE BACKEND

       Swift (Openstack)
              swift://container_name[/prefix]

              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS

       Tahoe-LAFS
              tahoe://alias/directory

       WebDAV
              webdav[s]://user[:password]@other.host[:port]/some_dir

              alternatively try lftp+webdav[s]://

       Optical media (ISO9660 CD/DVD/Bluray using xorriso)
              xorriso:///dev/byOpticalDrive[:/path/to/directory/on/iso]
              xorriso:///path/to/image.iso[:/path/to/directory/on/iso]

              See also A NOTE ON THE XORRISO BACKEND

TIME FORMATS
       duplicity uses time strings in two places. Firstly, many of the
       files duplicity creates will have the time in their filenames in the
       w3 datetime format as described in a w3 note at
       http://www.w3.org/TR/NOTE-datetime. Basically they look like
       "2001-07-15T04:09:38-07:00", which means what it looks like. The
       "-07:00" section means the time zone is 7 hours behind UTC.

       Secondly, the -t, --time, and --restore-time options take a time
       string, which can be given in any of several formats:

       1.     the string "now" (refers to the current time)

       2.     a sequence of digits, like "123456890" (indicating the time
              in seconds after the epoch)

       3.     A string like "2002-01-25T07:00:00+02:00" in datetime format

       4.     An interval, which is a number followed by one of the
              characters s, m, h, D, W, M, or Y (indicating seconds,
              minutes, hours, days, weeks, months, or years respectively),
              or a series of such pairs. In this case the string refers to
              the time that preceded the current time by the length of the
              interval. For instance, "1h78m" indicates the time that was
              one hour and 78 minutes ago. The calendar here is
              unsophisticated: a month is always 30 days, a year is always
              365 days, and a day is always 86400 seconds.

       5.     A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY,
              or MM-DD-YYYY, which indicates midnight on the day in
              question, relative to the current time zone settings. For
              instance, "2002/3/5", "03-05-2002", and "2002-3-05" all mean
              March 5th, 2002.
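       The interval arithmetic of format 4 can be checked directly in
       shell; the restore command shown in the comment is a hypothetical
       example (URL and paths are placeholders):

       ```shell
       # "1h78m" means: the moment one hour and 78 minutes before now.
       offset=$(( 1 * 3600 + 78 * 60 ))  # seconds represented by "1h78m"
       echo "$offset"                    # 8280
       # Hypothetical restore at that point in time:
       # duplicity restore --time 1h78m sftp://uid@other.host/some_dir /home/me.restored
       ```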

FILE SELECTION
       When duplicity is run, it searches through the given source
       directory and backs up all the files specified by the file selection
       system, unless --files-from has been specified, in which case the
       passed list of individual files is used instead.

       The file selection system comprises a number of file selection
       conditions, which are set using one of the following command line
       options:

              --exclude
              --exclude-device-files
              --exclude-if-present
              --exclude-filelist
              --exclude-regexp
              --include
              --include-filelist
              --include-regexp

       For each individual file found in the source directory, the file
       selection conditions are checked in the order they are specified on
       the command line. Should a selection condition match, the file will
       be included or excluded accordingly and the file selection system
       will proceed to the next file without checking the remaining
       conditions.

       Earlier arguments therefore take precedence where multiple
       conditions match any given file, and are thus usually given in
       order of decreasing specificity. If no selection conditions match a
       given file, then the file is implicitly included.

       For example,

              duplicity --include /usr --exclude /usr /usr
              scp://user@host/backup

       is exactly the same as

              duplicity /usr scp://user@host/backup

       because the --include directive matches all files in the backup
       source directory, and takes precedence over the contradicting
       --exclude option as it comes first.

       As a more meaningful example,

              duplicity --include /usr/local/bin --exclude /usr/local /usr
              scp://user@host/backup

       would back up the /usr/local/bin directory (and its contents), but
       not /usr/local/doc. Note that this is not the same as simply
       specifying /usr/local/bin as the backup source, as other files and
       folders under /usr will also be (implicitly) included.

       The order of the --include and --exclude arguments is important. In
       the previous example, if the less specific --exclude directive had
       precedence it would prevent the more specific --include from
       matching any files.

       The patterns passed to the --include, --exclude,
       --include-filelist, and --exclude-filelist options are interpreted
       as extended shell globbing patterns by default. This behaviour can
       be changed with the following filter mode arguments:

              --filter-globbing
              --filter-literal
              --filter-regexp

       These arguments change the interpretation of the patterns used in
       selection conditions, affecting all subsequent file selection
       options passed on the command line. They may be specified multiple
       times in order to switch pattern interpretations as needed.

       Literal strings differ from globs in that the pattern must match
       the filename exactly. This can be useful where filenames contain
       characters which have special meaning in shell globs or regular
       expressions. If passing dynamically generated file lists to
       duplicity using the --include-filelist or --exclude-filelist
       options, then the use of --filter-literal is recommended unless
       regular expression or globbing is specifically required.

       The regular expression language used for selection conditions
       specified with --include-regexp, --exclude-regexp, or when
       --filter-regexp is in effect is as implemented by the Python
       standard library.

       Extended shell globbing patterns may contain: *, **, ?, and [...]
       (character ranges). As in a normal shell, * can be expanded to any
       string of characters not containing "/", ? expands to any single
       character except "/", and [...] expands to a single character of
       those characters specified (ranges are acceptable). The pattern **
       expands to any string of characters whether or not it contains "/".

       In addition to the above filter mode arguments, the following can
       be used in the same fashion to enable (default) or disable case
       sensitivity in the evaluation of file selection conditions:

              --filter-ignorecase
              --filter-strictcase

       An example of filter mode switching including case insensitivity is

              --filter-ignorecase --include /usr/bin/*.PY --filter-literal
              --include /usr/bin/special?file*name --filter-strictcase
              --exclude /usr/bin

       which would back up *.py, *.pY, *.Py, and *.PY files under /usr/bin
       and also the single literally specified file with globbing
       characters in the name. The use of --filter-strictcase is not
       technically necessary here, but is included because case-insensitive
       matching may (depending on the backup source path) cause unexpected
       interactions between the --include and --exclude options, should
       the directory portion of the path (/usr/bin) contain any uppercase
       characters.

       If the pattern starts with "ignorecase:" (case insensitive), then
       this prefix will be removed and any character in the string can be
       replaced with an upper- or lowercase version of itself. This prefix
       is a legacy feature supported for shell globbing selection
       conditions only, but for backward compatibility reasons is
       otherwise considered part of the pattern itself (use
       --filter-ignorecase instead).

       Remember that you may need to quote patterns when typing them into
       a shell, so the shell does not interpret the globbing patterns or
       whitespace characters before duplicity sees them.

       Selection patterns should generally be thought of as filesystem
       paths rather than arbitrary strings. For selection conditions using
       extended shell globbing patterns, the --exclude pattern option
       matches a file if:

       1.     pattern can be expanded into the file's filename, or

       2.     the file is inside a directory matched by the option.

       Conversely, the --include pattern option matches a file if:

       1.     pattern can be expanded into the file's filename, or

       2.     the file is inside a directory matched by the option, or

       3.     the file is a directory which contains a file matched by the
              option.

       For example,

              --exclude /usr/local

       matches e.g. /usr/local, /usr/local/lib, and
       /usr/local/lib/netscape. It is the same as --exclude /usr/local
       --exclude '/usr/local/**'.

       On the other hand

              --include /usr/local

       specifies that /usr, /usr/local, /usr/local/lib, and
       /usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus
       you don't have to worry about including parent directories to make
       sure that included subdirectories have somewhere to go.

       Finally,

              --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'

       would match a file like /usR/5fOO/hello/there/world.py. If it did
       match anything, it would also match /usr. If there is no existing
       file that the given pattern can be expanded into, the option will
       not match /usr alone.

       This treatment of patterns in globbing and literal selection
       conditions as filesystem paths reduces the number of explicit
       conditions required. However, it does require that the paths
       described by all variants of the --include or --exclude options are
       fully specified relative to the backup source directory.

       For selection conditions using literal strings, the same logic
       applies except that scenario 1 is for an exact match of the
       pattern.

       For selection conditions using regular expressions the pattern is
       evaluated as a regular expression rather than a filesystem path.
       Scenario 3 in the above therefore does not apply, the implications
       of which are discussed at the end of this section.

       The --include-filelist and --exclude-filelist options also
       introduce file selection conditions. They direct duplicity to read
       in a text file (either ASCII or UTF-8), each line of which is a
       file specification, and to include or exclude the matching files.
       Lines are separated by newlines or nulls, depending on whether the
       --null-separator switch was given.

       Each line in the filelist will be interpreted as a selection
       pattern in the same way --include and --exclude options are
       interpreted, except that lines starting with "+ " are interpreted
       as include directives, even if found in a filelist referenced by
       --exclude-filelist. Similarly, lines starting with "- " exclude
       files even if they are found within an include filelist.

       For example, if file "list.txt" contains the lines:

              /usr/local
              - /usr/local/doc
              /usr/local/bin
              + /var
              - /var

       then --include-filelist list.txt would include /usr, /usr/local,
       and /usr/local/bin. It would exclude /usr/local/doc,
       /usr/local/doc/python, etc. It would also include /usr/local/man,
       as this is included within /usr/local. Finally, it is undefined
       what happens with /var. A single file list should not contain
       conflicting file specifications.

       Each line in the filelist will be interpreted as per the current
       filter mode in the same way --include and --exclude options are
       interpreted. For instance, if the file "list.txt" contains the
       lines:

              dir/foo
              + dir/bar
              - **

       Then --include-filelist list.txt would be exactly the same as
       specifying --include dir/foo --include dir/bar --exclude ** on the
       command line.

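       The equivalence above can be reproduced directly; this sketch
       writes the three-line filelist to a temporary file (the path is
       arbitrary) and shows the corresponding invocation as a comment:

       ```shell
       # Write the example filelist; "+ " forces include, "- " forces exclude.
       cat > /tmp/list.txt <<'EOF'
       dir/foo
       + dir/bar
       - **
       EOF
       # Equivalent to: --include dir/foo --include dir/bar --exclude '**'
       # duplicity --include-filelist /tmp/list.txt /some/source scp://user@host/backup
       ```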
       Note that specifying very large numbers of selection rules as
       filelists can incur a substantial performance penalty as these
       rules will (potentially) be checked for every file in the backup
       source directory. If you need to back up arbitrary lists of
       specific files (i.e. not described by regexp patterns or shell
       globs) then --files-from is likely to be more performant.

       Finally, the --include-regexp and --exclude-regexp options allow
       files to be included and excluded if their filenames match a
       regular expression. Regular expression syntax is too complicated
       to explain here, but is covered in Python's library reference.
       Unlike the --include and --exclude options, the regular expression
       options don't match files containing or contained in matched
       files. So for instance

              --include-regexp '[0-9]{7}(?!foo)'

       matches any files whose full pathnames contain 7 consecutive
       digits which aren't followed by 'foo'. However, it wouldn't match
       /home even if /home/ben/1234567 existed.

A NOTE ON AMAZON DRIVE
       1.     The API Keys used for Amazon Drive have not been granted
              production limits. Amazon does not say what the development
              limits are and is not replying to requests to whitelist
              duplicity. A related tool, acd_cli, was demoted to
              development limits, but continues to work fine except for
              cases of excessive usage. If you experience throttling and
              similar issues with Amazon Drive using this backend, please
              report them to the mailing list.

       2.     If you previously used the acd+acdcli backend, it is
              strongly recommended to update to the ad backend instead,
              since it interfaces directly with Amazon Drive. You will
              need to set up the OAuth once again, but can otherwise keep
              your backups and config.

A NOTE ON AMAZON S3
       Backing up to Amazon S3 utilizes the boto3 library.

       The boto3 backend does not support bucket creation. This deliberate
       choice simplifies the code, and sidesteps problems related to
       region selection. Additionally, it is probably not a good practice
       to give your backup role bucket creation rights. In most cases the
       role used for backups should probably be limited to specific
       buckets.

       The boto3 backend only supports newer domain style buckets. Amazon
       is moving to deprecate the older bucket style, so migration is
       recommended.

       The boto3 backend does not currently support initiating restores
       from the glacier storage class. When restoring a backup from
       glacier or glacier deep archive, the backup files must first be
       restored out of band. There are multiple options when restoring
       backups from cold storage, which vary in both cost and speed. See
       Amazon's documentation for details.

       The following environment variables are required for
       authentication:

              AWS_ACCESS_KEY_ID (required),
              AWS_SECRET_ACCESS_KEY (required)
       or
              BOTO_CONFIG (required) pointing to a boto config file.

       For simplicity's sake we will document the use of the AWS_* vars
       only. See the boto3 documentation on the web if you want to use
       the config file.

       boto3 backend example backup command line:

              AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
              duplicity /some/path s3:///bucket/subfolder

       You may add --s3-endpoint-url (to access non-Amazon S3 services or
       regional endpoints) and may need --s3-region-name (for buckets
       created in specific regions) and other --s3-... options documented
       above.
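       For instance, a backup to a bucket created in a specific region
       could look like the following sketch. The region, credentials,
       bucket and source path are illustrative placeholders:

       ```shell
       # Hypothetical: back up to a bucket created in eu-central-1.
       # Credential values, bucket and paths are placeholders.
       export AWS_ACCESS_KEY_ID='key_id'
       export AWS_SECRET_ACCESS_KEY='access_key'
       # duplicity /some/path \
       #     --s3-region-name eu-central-1 \
       #     s3:///bucket/subfolder
       ```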
1566
1568 The Azure backend requires the Microsoft Azure Storage Blobs client
1569 library for Python to be installed on the system. See REQUIREMENTS.
1570
1571 It uses the environment variable AZURE_CONNECTION_STRING (required).
1572 This string contains all necessary information such as Storage Account
1573 name and the key for authentication. You can find it under Access Keys
1574 for the storage account.
1575
1576 Duplicity will take care to create the container when performing the
1577 backup. Do not create it manually before.
1578
1579 A container name (as given as the backup url) must be a valid DNS name,
1580 conforming to the following naming rules:
1581
1582 1. Container names must start with a letter or number, and
1583 can contain only letters, numbers, and the dash (-)
1584 character.
1585 2. Every dash (-) character must be immediately preceded and
1586 followed by a letter or number; consecutive dashes are
1587 not permitted in container names.
1588 3. All letters in a container name must be lowercase.
1589 4. Container names must be from 3 through 63 characters
1590 long.
1591
1592 These rules come from Azure; see https://docs.microsoft.com/en-
1593 us/rest/api/storageservices/naming-and-referencing-
1594 containers--blobs--and-metadata
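As a quick sanity check, the naming rules above can be expressed as a small shell function (the function name is illustrative and not part of duplicity):

```shell
# Validate a container name against the Azure naming rules above.
# Illustrative helper only; duplicity performs its own handling.
valid_container_name() {
    name=$1
    # rule 4: 3 through 63 characters long
    [ "${#name}" -ge 3 ] && [ "${#name}" -le 63 ] || return 1
    # rules 1-3: lowercase letters, digits, and single interior dashes only
    printf '%s\n' "$name" | grep -Eq '^[a-z0-9](-?[a-z0-9])+$'
}

valid_container_name my-backups && echo "valid"   # prints "valid"
```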
1595
A NOTE ON BOX ACCESS
1597 The box backend requires boxsdk with jwt support to be installed on the
1598 system. See REQUIREMENTS.
1599
       It uses the environment variable BOX_CONFIG_PATH (optional).
       This string contains the path to the Box custom app's
       config.json.  Either this environment variable or the config
       query parameter in the URL needs to be specified; if both are
       specified, the query parameter takes precedence.
1604
1605 Create a Box custom app
       In order to use the box backend, the user needs to create a box
       custom app in the box developer console
       (https://app.box.com/developers/console).

       After creating a new custom app, make sure it is configured as
       follows:
1611
1612 1. Choose "App Access Only" for "App Access Level"
1613 2. Check "Write all files and folders stored in Box"
1614 3. Generate a Public/Private Keypair
1615
       The user also needs to grant the created custom app permission
       in the admin console (https://app.box.com/master/custom-apps) by
       clicking the "+" button and entering the client_id, which can be
       found on the custom app's configuration page.
1620
A NOTE ON CLOUD FILES ACCESS
1622 Pyrax is Rackspace's next-generation Cloud management API, including
1623 Cloud Files access. The cfpyrax backend requires the pyrax library to
1624 be installed on the system. See REQUIREMENTS.
1625
1626 Cloudfiles is Rackspace's now deprecated implementation of OpenStack
1627 Object Storage protocol. Users wishing to use Duplicity with Rackspace
1628 Cloud Files should migrate to the new Pyrax plugin to ensure support.
1629
1630 The backend requires python-cloudfiles to be installed on the system.
1631 See REQUIREMENTS.
1632
1633 It uses three environment variables for authentication:
1634 CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
1635 CLOUDFILES_AUTHURL (optional)
1636
       If CLOUDFILES_AUTHURL is unspecified, it will default to the
       value provided by python-cloudfiles, which points to Rackspace;
       hence this value must be set in order to use other Cloud Files
       providers.
1640
A NOTE ON DROPBOX ACCESS
       1. First of all, the Dropbox backend requires a valid
          authentication token.  It should be passed via the
          DPBX_ACCESS_TOKEN environment variable.
          To obtain it, please create a 'Dropbox API' application at:
          https://www.dropbox.com/developers/apps/create
          Then visit the app settings and use the 'Generated access
          token' under the OAuth2 section.
          Alternatively, you can let duplicity generate the access
          token itself.  In that case, temporarily export DPBX_APP_KEY
          and DPBX_APP_SECRET using the values from the app settings
          page and run duplicity interactively.
          It will print the URL that you need to open in the browser to
          obtain an OAuth2 token for the application.  Just follow the
          on-screen instructions and then put the generated token into
          the DPBX_ACCESS_TOKEN variable.  Once done, feel free to
          unset DPBX_APP_KEY and DPBX_APP_SECRET.
1658
       2. "some_dir" must already exist in the Dropbox folder.
          Depending on the access token kind it may be:
               Full Dropbox: the path is absolute and starts from the
               'Dropbox' root folder.
               App Folder: the path is relative to the application
               folder.  The Dropbox client will show it in
               ~/Dropbox/Apps/<app-name>
1665
       3. When using Dropbox for storage, be aware that all files,
          including the ones in the Apps folder, will be synced to all
          connected computers.  You may prefer to use a separate
          Dropbox account specifically for the backups, and not connect
          any computers to that account.  Alternatively, you can
          configure selective sync on all computers to avoid syncing
          the backup files.
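A minimal backup invocation, assuming the access token has already been obtained as described above (angle-bracket values are placeholders):

```shell
# <access_token> is a placeholder for the token obtained above
DPBX_ACCESS_TOKEN=<access_token> duplicity /home/me dpbx:///some_dir
```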
1672
A NOTE ON FILENAME PREFIXES
1674 Filename prefixes can be used in multi backend with mirror mode to
1675 define affinity rules. They can also be used in conjunction with S3
1676 lifecycle rules to transition archive files to Glacier, while keeping
1677 metadata (signature and manifest files) on S3.
1678
1679 Duplicity does not require access to archive files except when
1680 restoring from backup.
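For example, metadata can be kept under a prefix distinct from the archive volumes, so that an S3 lifecycle rule matching only the archive prefix transitions volumes to Glacier (the prefix values archive_ and meta_ are arbitrary examples):

```shell
# Volumes get the archive_ prefix; manifest and signature files get meta_,
# so a lifecycle rule on "archive_" moves only the volumes to Glacier.
duplicity backup --file-prefix-archive archive_ \
          --file-prefix-manifest meta_ \
          --file-prefix-signature meta_ \
          /some/path s3:///bucket/subfolder
```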
1681
A NOTE ON GOOGLE CLOUD STORAGE (GCS via S3)
1683 Overview
       Duplicity access to GCS currently relies on its
       Interoperability API (basically S3 for GCS).  This needs to be
       actively enabled before access is possible.  For details, read
       the section Preparations below.
1688
1689 Preparations
1690 1. login on https://console.cloud.google.com/
1691 2. go to Cloud Storage->Settings->Interoperability
1692 3. create a Service account (if needed)
       4. create a Service account HMAC access key and secret (copy the
          secret immediately; it can NOT be recovered later)
1695 5. go to Cloud Storage->Browser
1696 6. create a bucket
1697 7. add permissions for Service account that was used to set up
1698 Interoperability access above
1699
1700 Once set up you can use the generated Interoperable Storage Access key
1701 and secret and pass them to duplicity as described in the next section.
1702
1703 Usage
       The following examples show accessing GCS via S3 for a
       collection-status action.  The environment variables, options
       and URL format shown apply to all other actions as well.
1707
       Using boto3, supplying --s3-endpoint-url manually:
1709
1710 AWS_ACCESS_KEY_ID=<keyid> AWS_SECRET_ACCESS_KEY=<secret>
1711 duplicity collection-status s3:///<bucket>/<folder>
1712 --s3-endpoint-url=https://storage.googleapis.com
1713
A NOTE ON GDRIVE BACKEND
       GDrive: is a rewritten PyDrive: backend with fewer dependencies
       and a simpler setup; it uses the JSON keys downloaded directly
       from the Google Cloud Console.
1718
       Note that Google has two drive methods, `Shared (previously
       Team) Drives` and `My Drive`; both can be shared but require
       different addressing.
1721
1722 For a Google Shared Drives folder
1723
       The Shared Drive ID is specified as a query parameter, driveID,
       in the backend URL.  Example:
1726 gdrive://developer.gserviceaccount.com/target-
1727 folder/?driveID=<SHARED DRIVE ID>
1728
1729 For a Google My Drive based shared folder
1730
       The My Drive folder ID is specified as a query parameter,
       myDriveFolderID, in the backend URL.  Example:
1733 export GOOGLE_SERVICE_ACCOUNT_URL=<serviceaccount-
1734 name>@<serviceaccount-name>.iam.gserviceaccount.com
1735 gdrive://${GOOGLE_SERVICE_ACCOUNT_URL}/<target-folder-name-in-
1736 myDriveFolder>?myDriveFolderID=root
1737
1738
       There are also two ways to authenticate to use GDrive: with a
       regular account or with a "service account".  With a service
       account, a separate account is created that is only accessible
       via Google APIs, not a web login.  With a regular account, you
       can store backups in your normal Google Drive.
1744
1745 To use a service account, go to the Google developers console at
1746 https://console.developers.google.com. Create a project, and make sure
1747 Drive API is enabled for the project. In the "Credentials" section,
1748 click "Create credentials", then select Service Account with JSON key.
1749
1750 The GOOGLE_SERVICE_JSON_FILE environment variable needs to contain the
1751 path to the JSON file on duplicity invocation.
1752
1753 export GOOGLE_SERVICE_JSON_FILE=<path-to-serviceaccount-
1754 credentials.json>
1755
1756
1757 The alternative is to use a regular account. To do this, start as
1758 above, but when creating a new Client ID, select "Create OAuth client
1759 ID", with application type of "Desktop app". Download the
1760 client_secret.json file for the new client, and set the
1761 GOOGLE_CLIENT_SECRET_JSON_FILE environment variable to the path to this
1762 file, and GOOGLE_CREDENTIALS_FILE to a path to a file where duplicity
1763 will keep the authentication token - this location must be writable.
1764
1765 NOTE: As a sanity check, GDrive checks the host and username from the
1766 URL against the JSON key, and refuses to proceed if the addresses do
1767 not match. Either the email (for the service accounts) or Client ID
1768 (for regular OAuth accounts) must be present in the URL. See URL FORMAT
1769 above.
1770
1771 First run / OAuth 2.0 authorization
       During the first run, you will be prompted to visit a URL in
       your browser to grant access to your Google Drive.  A temporary
       HTTP service will be started on a local network interface for
       this purpose (by default on http://localhost:8080/).  The IP
       address/host and port can be adjusted if need be by setting the
       environment variables GOOGLE_OAUTH_LOCAL_SERVER_HOST and
       GOOGLE_OAUTH_LOCAL_SERVER_PORT respectively.
1779
       If you are running duplicity in a remote location, you will need
       to make sure that you can reach the above HTTP service with a
       browser, e.g. by using port forwarding or a temporary firewall
       permission.
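For example, when duplicity runs on a remote host, the local OAuth listener can be reached through an SSH tunnel (the host name is a placeholder):

```shell
# Forward local port 8080 to port 8080 on the remote host running
# duplicity, then open http://localhost:8080/ in the local browser.
ssh -L 8080:localhost:8080 user@remote.host
```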
1784
1785 The access credentials will be saved in the JSON file mentioned above
1786 for future use after a successful authorization.
1787
A NOTE ON HUBIC
1789 The hubic backend requires the pyrax library to be installed on the
1790 system. See REQUIREMENTS. You will need to set your credentials for
1791 hubiC in a file called ~/.hubic_credentials, following this pattern:
1792 [hubic]
1793 email = your_email
1794 password = your_password
1795 client_id = api_client_id
1796 client_secret = api_secret_key
1797 redirect_uri = http://localhost/
1798
A NOTE ON IMAP
       An IMAP account can be used as a target for the upload.  The
       userid may be specified and the password will be requested.
       The from_address_prefix may be specified (and probably should
       be).  This text will be used as the "From" address on the IMAP
       server.  On a restore (or list) action, the from_address_prefix
       then distinguishes between different backups.
1806
A NOTE ON MEDIAFIRE BACKEND
       This backend requires the mediafire Python library to be
       installed on the system.  See REQUIREMENTS.
1810
1811 Use URL escaping for username (and password, if provided via command
1812 line):
1813
1814 mf://duplicity%40example.com@mediafire.com/some_folder
1815 The destination folder will be created for you if it does not exist.
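The percent-encoding can be produced with any URL-quoting tool; a sketch using python3 from the shell (assuming python3 is available):

```shell
# Percent-encode the username so that '@' becomes '%40'
user=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'duplicity@example.com')
echo "mf://${user}@mediafire.com/some_folder"
# prints mf://duplicity%40example.com@mediafire.com/some_folder
```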
1816
A NOTE ON MULTI BACKEND
1818 The multi backend allows duplicity to combine the storage available in
1819 more than one backend store (e.g., you can store across a google drive
1820 account and a onedrive account to get effectively the combined storage
1821 available in both). The URL path specifies a JSON formatted config
1822 file containing a list of the backends it will use. The URL may also
1823 specify "query" parameters to configure overall behavior. Each element
1824 of the list must have a "url" element, and may also contain an optional
1825 "description" and an optional "env" list of environment variables used
1826 to configure that backend.
1827 Query Parameters
1828 Query parameters come after the file URL in standard HTTP format for
1829 example:
1830 multi:///path/to/config.json?mode=mirror&onfail=abort
1831 multi:///path/to/config.json?mode=stripe&onfail=continue
1832 multi:///path/to/config.json?onfail=abort&mode=stripe
1833 multi:///path/to/config.json?onfail=abort
1834 Order does not matter, however unrecognized parameters are considered
1835 an error.
1836
1837 mode=stripe
1838 This mode (the default) performs round-robin access to the list
1839 of backends. In this mode, all backends must be reliable as a
1840 loss of one means a loss of one of the archive files.
1841
1842 mode=mirror
1843 This mode accesses backends as a RAID1-store, storing every file
1844 in every backend and reading files from the first-successful
1845 backend. A loss of any backend should result in no failure.
1846 Note that backends added later will only get new files and may
1847 require a manual sync with one of the other operating ones.
1848
       onfail=continue
              This setting (the default) continues all write operations
              in a best-effort manner.  Any failure results in the next
              backend being tried.  Failure is reported only when all
              backends fail a given operation, with the error result
              from the last failure.
1854
1855 onfail=abort
1856 This setting considers any backend write failure as a
1857 terminating condition and reports the error. Data reading and
1858 listing operations are independent of this and will try with the
1859 next backend on failure.
1860 JSON File Example
1861 [
1862 {
            "description": "a comment about the backend",
1864 "url": "abackend://myuser@domain.com/backup",
1865 "env": [
1866 {
1867 "name" : "MYENV",
1868 "value" : "xyz"
1869 },
1870 {
1871 "name" : "FOO",
1872 "value" : "bar"
1873 }
1874 ],
1875 "prefixes": ["prefix1_", "prefix2_"]
1876 },
1877 {
1878 "url": "file:///path/to/dir"
1879 }
1880 ]
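With a config file like the above, a mirror-mode backup could be started as (the paths are examples):

```shell
duplicity /some/path "multi:///path/to/config.json?mode=mirror&onfail=abort"
```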
1881
A NOTE ON ONEDRIVE BACKEND
       onedrive:// works with both personal and business onedrive as
       well as sharepoint drives.  On first use you will be provided
       with a URL to log in with a Microsoft account.  Open it in your
       web browser.
1886
1887 After authenticating, copy the redirected URL back to duplicity.
1888 Duplicity will fetch a token and store it in
1889 ~/.duplicity_onedrive_oauthtoken.json. This location can be overridden
1890 by setting the DUPLICITY_ONEDRIVE_TOKEN environment variable.
1891
1892 Duplicity uses a default App ID registered with Microsoft Azure AD. It
1893 will need to be approved by an administrator of your Office365 Tenant
1894 on a business account.
1895
1896 Register and set your own microsoft app id
1897 1. visit https://portal.azure.com
1898
1899 2. Choose "Enterprise Applications", then "Create your own
1900 Application"
1901
1902 3. Input your application name and select "Register an application
1903 to integrate with Azure AD".
1904
1905 4. Continue to the next page and set the redirect uri to
1906 "https://login.microsoftonline.com/common/oauth2/nativeclient",
1907 choosing "Public client/native" from the dropdown. Click create.
1908
1909 5. Find the application id in "Enterprise Applications" and set the
1910 environment variable DUPLICITY_ONEDRIVE_CLIENT_ID to it.
1911
1912 More information on Microsoft Apps at:
1913 https://learn.microsoft.com/en-us/azure/active-
1914 directory/develop/quickstart-register-app
1915
1916 Backup to a sharepoint site instead of onedrive
       To use a sharepoint site you need to find and provide the site's
       tenant and site id.
1919
1920 1. Login with your Microsoft Account at
1921 https://<o365_tenant>.sharepoint.com/
1922
1923 2. Navigate to
1924 https://<o365_tenant>.sharepoint.com/sites/<path_to_site>/_api/site/id
1925
       3. Copy the displayed UUID (site_id) and set the
          DUPLICITY_ONEDRIVE_ROOT environment variable to
          "sites/<o365_tenant>.sharepoint.com,<site_id>/drive"
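Putting it together, with the placeholders filled in as described above (the target folder path is an example):

```shell
export DUPLICITY_ONEDRIVE_ROOT="sites/<o365_tenant>.sharepoint.com,<site_id>/drive"
duplicity /home/me onedrive://backups/home
```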
1929
A NOTE ON PAR2 WRAPPER BACKEND
1931 Par2 Wrapper Backend can be used in combination with all other backends
1932 to create recovery files. Just add par2+ before a regular scheme (e.g.
1933 par2+ftp://user@host/dir or par2+s3+http://bucket_name ). This will
1934 create par2 recovery files for each archive and upload them all to the
1935 wrapped backend.
1936 Before restoring, archives will be verified. Corrupt archives will be
1937 repaired on the fly if there are enough recovery blocks available.
1938 Use --par2-redundancy percent to adjust the size (and redundancy) of
1939 recovery files in percent.
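For example, to wrap the sftp example from above with 10% redundancy:

```shell
# Create and upload par2 recovery files alongside each volume
duplicity --par2-redundancy 10 /home/me par2+sftp://uid@other.host/some_dir
```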
1940
A NOTE ON PCA ACCESS
       PCA is a long-term data archival solution by OVH.  It runs a
       slightly modified version of OpenStack Swift that introduces
       latency in the data retrieval process.  It is a good pick for a
       multi backend configuration, receiving the volumes while another
       backend is used to store manifests and signatures.
1947
       The backend requires python-swiftclient to be installed on the
       system.  python-keystoneclient is also needed to interact with
       OpenStack's Keystone Identity service.  See REQUIREMENTS.
1951
       It uses the following environment variables for authentication:
       PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL
       (required), PCA_USERID (optional), PCA_TENANTID (optional, but
       either the tenant name or tenant id must be supplied),
       PCA_REGIONNAME (optional), PCA_TENANTNAME (optional, but either
       the tenant name or tenant id must be supplied)
1958
1959 If the user was previously authenticated, the following environment
1960 variables can be used instead: PCA_PREAUTHURL (required),
1961 PCA_PREAUTHTOKEN (required)
1962
1963 If PCA_AUTHVERSION is unspecified, it will default to version 2.
1964
A NOTE ON PYDRIVE BACKEND
       The pydrive backend requires the Python PyDrive package to be
       installed on the system.  See REQUIREMENTS.
1968
       There are two ways to use PyDrive: with a regular account or
       with a "service account".  With a service account, a separate
       account is created that is only accessible via Google APIs, not
       a web login.  With a regular account, you can store backups in
       your normal Google Drive.
1974
1975 To use a service account, go to the Google developers console at
1976 https://console.developers.google.com. Create a project, and make sure
1977 Drive API is enabled for the project. Under "APIs and auth", click
1978 Create New Client ID, then select Service Account with P12 key.
1979
1980 Download the .p12 key file of the account and convert it to the .pem
1981 format:
1982 openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem
1983
       The content of the .pem file should be passed to the
       GOOGLE_DRIVE_ACCOUNT_KEY environment variable for
       authentication.
1986
       The email address of the account will be used as part of the
       URL.  See URL FORMAT above.
1989
1990 The alternative is to use a regular account. To do this, start as
1991 above, but when creating a new Client ID, select "Installed
1992 application" of type "Other". Create a file with the following content,
1993 and pass its filename in the GOOGLE_DRIVE_SETTINGS environment
1994 variable:
1995 client_config_backend: settings
1996 client_config:
1997 client_id: <Client ID from developers' console>
1998 client_secret: <Client secret from developers' console>
1999 save_credentials: True
2000 save_credentials_backend: file
2001 save_credentials_file: <filename to cache credentials>
2002 get_refresh_token: True
2003
       In this scenario, the username and host parts of the URL play no
       role; only the path matters.  During the first run, you will be
       prompted to visit a URL in your browser to grant access to your
       drive.  Once granted, you will receive a verification code to
       paste back into duplicity.  The credentials are then cached in
       the file referenced above for future use.
2010
A NOTE ON RCLONE BACKEND
2012 Rclone is a powerful command line program to sync files and directories
2013 to and from various cloud storage providers.
2014
2015 Usage
2016 Once you have configured an rclone remote via
2017
2018 rclone config
2019
2020 and successfully set up a remote (e.g. gdrive for Google Drive),
2021 assuming you can list your remote files with
2022
2023 rclone ls gdrive:mydocuments
2024
2025 you can start your backup with
2026
2027 duplicity /mydocuments rclone://gdrive:/mydocuments
2028
       Please note the slash after the second colon.  Some storage
       providers will work with or without a slash after the colon, but
       others will not.  Since duplicity will complain about a
       malformed URL if the slash is not present, always put it after
       the colon, and the backend will handle it for you.
2034
2035 Options
2036 Note that all rclone options can be set by env vars as well. This is
2037 properly documented here
2038
2039 https://rclone.org/docs/
2040
       but in a nutshell you need to take the long option name, strip
       the leading --, change - to _, make it upper case and prepend
       RCLONE_.  For example:
2044
2045 the equivalent of '--stats 5s' would be the env var
2046 RCLONE_STATS=5s
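The transformation can be sketched as a shell helper (the function name is illustrative, not part of duplicity or rclone):

```shell
# Turn an rclone long option name into its environment variable name:
# strip the leading --, change - to _, upper-case, prepend RCLONE_.
rclone_env_name() {
    printf 'RCLONE_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

rclone_env_name --stats              # prints RCLONE_STATS
rclone_env_name --drive-chunk-size   # prints RCLONE_DRIVE_CHUNK_SIZE
```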
2047
A NOTE ON SLATE BACKEND
2049 Three environment variables are used with the slate backend:
2050 1. `SLATE_API_KEY` - Your slate API key
2051 2. `SLATE_SSL_VERIFY` - either '1'(True) or '0'(False) for ssl
2052 verification (optional - True by default)
       3. `PASSPHRASE` - your gpg passphrase for encryption (optional -
          you will be prompted if it is not set; it is not used at all
          with the `--no-encryption` parameter)
2056
2057 To use the slate backend, use the following scheme:
2058 slate://[slate-id]
2059
2060 e.g. Full backup of current directory to slate:
2061 duplicity full . "slate://6920df43-5c3w-2x7i-69aw-2390567uav75"
2062
2063 Here's a demo:
2064 https://gitlab.com/Shr1ftyy/duplicity/uploads/675664ef0eb431d14c8e20045e3fafb6/slate_demo.mp4
2065
A NOTE ON SSH BACKENDS
       The ssh backends support the sftp and scp/ssh transport
       protocols.  These are fundamentally different, which is a known
       source of user confusion.  If you plan to access your backend
       via one of them, please inform yourself about the requirements
       for a server to support sftp or scp/ssh access.
       To make it even more confusing, the user can choose between
       several ssh backends via a scheme prefix: paramiko+ (default),
       pexpect+, lftp+...
2073 paramiko & pexpect support --use-scp, --ssh-askpass and --ssh-options.
2074 Only the pexpect backend allows one to define --scp-command and --sftp-
2075 command.
2076 SSH paramiko backend (default) is a complete reimplementation of ssh
2077 protocols natively in python. Advantages are speed and maintainability.
2078 Minor disadvantage is that extra packages are needed as listed in
       REQUIREMENTS.  In sftp (default) mode, all operations are done
       via the corresponding sftp commands.  In scp mode (--use-scp),
       scp is used for put/get operations, but listing is done via an
       ssh remote shell.
2082 SSH pexpect backend is the legacy ssh backend using the command line
2083 ssh binaries via pexpect. Older versions used scp for get and put
2084 operations and sftp for list and delete operations. The current
2085 version uses sftp for all four supported operations, unless the --use-
2086 scp option is used to revert to old behavior.
2087 SSH lftp backend is simply there because lftp can interact with the ssh
2088 cmd line binaries. It is meant as a last resort in case the above
2089 options fail for some reason.
2090
2091 Why use sftp instead of scp?
       The change to sftp was made in order to allow the remote system
       to chroot the backup, thus providing better security, and
       because sftp does not suffer from shell quoting issues like scp.
       Scp also does not
2095 support any kind of file listing, so sftp or ssh access will always be
2096 needed in addition for this backend mode to work properly. Sftp does
2097 not have these limitations but needs an sftp service running on the
2098 backend server, which is sometimes not an option.
2099
A NOTE ON SSL CERTIFICATE VERIFICATION
       Certificate verification is, as implemented right now [02.2016],
       only available in the webdav and lftp backends.  Older Pythons
       (2.7.8 and earlier) and older lftp binaries need a file-based
       database of certification authority certificates (cacert file).
       Newer Python 2.7.9+ and recent lftp versions, however, support
       the system default certificates (usually in /etc/ssl/certs) and
       also allow giving an alternative ca cert folder via
       --ssl-cacert-path.
2108 The cacert file has to be a PEM formatted text file as currently
2109 provided by the CURL project. See
2110 http://curl.haxx.se/docs/caextract.html
2111 After creating/retrieving a valid cacert file you should copy it to
2112 either
2113 ~/.duplicity/cacert.pem
2114 ~/duplicity_cacert.pem
2115 /etc/duplicity/cacert.pem
       Duplicity searches for it in these locations, in the order
       given, and will fail if it cannot find it.  You can however
       specify the option --ssl-cacert-file <file> to point duplicity
       to a copy in a different location.
       Finally, there is the --ssl-no-check-certificate option to
       disable certificate verification altogether, in case some ssl
       library is missing or verification is not wanted.  Use it with
       care; even with self-signed servers, manually providing the
       private ca certificate is definitely the safer option.
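For example, pointing duplicity at a cacert file in a non-default location for a webdav backup (the paths are examples):

```shell
duplicity --ssl-cacert-file /path/to/cacert.pem /home/me webdavs://user@host/dir
```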
2124
A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
2126 Swift is the OpenStack Object Storage service.
       The backend requires python-swiftclient to be installed on the
       system.  python-keystoneclient is also needed to use OpenStack's
       Keystone Identity service.  See REQUIREMENTS.
2130
       It uses the following environment variables for authentication:
2132
2133 SWIFT_USERNAME (required),
2134 SWIFT_PASSWORD (required),
2135 SWIFT_AUTHURL (required),
2136 SWIFT_TENANTID or SWIFT_TENANTNAME (required with
2137 SWIFT_AUTHVERSION=2, can alternatively be defined in
2138 SWIFT_USERNAME like e.g. SWIFT_USERNAME="tenantname:user"),
2139 SWIFT_PROJECT_ID or SWIFT_PROJECT_NAME (required with
2140 SWIFT_AUTHVERSION=3),
2141 SWIFT_USERID (optional, required only for IBM Bluemix
2142 ObjectStorage),
2143 SWIFT_REGIONNAME (optional).
2144
2145 If the user was previously authenticated, the following environment
2146 variables can be used instead: SWIFT_PREAUTHURL (required),
2147 SWIFT_PREAUTHTOKEN (required)
2148
2149 If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
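A Keystone v3 example with placeholder credentials and endpoint:

```shell
export SWIFT_USERNAME="user"
export SWIFT_PASSWORD="secret"
export SWIFT_AUTHURL="https://keystone.example.com:5000/v3"
export SWIFT_AUTHVERSION="3"
export SWIFT_PROJECT_NAME="myproject"
duplicity /home/me swift://container_name
```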
2150
A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
       Signing and symmetrically encrypting at the same time with the
       gpg binary on the command line, as used within duplicity, is a
       particularly challenging issue.  Tests showed that the following
       combinations work:
       1. Set up gpg-agent properly.  Use the option --use-agent and
       enter both passphrases (symmetric and sign key) in the
       gpg-agent's dialog.
       2. Use a PASSPHRASE of your choice for symmetric encryption
       while the signing key has an empty passphrase.
       3. The PASSPHRASE used for symmetric encryption and the
       passphrase of the signing key are identical.
2162
A NOTE ON THE XORRISO BACKEND
2164 This backend uses the xorriso tool to append backups to optical media
2165 or ISO9660 images.
2166
2167 Use the following environment variables for more settings:
2168 XORRISO_PATH, set an alternative path to the xorriso executable
2169 XORRISO_WRITE_SPEED, specify the speed for writing to the
2170 optical disc. One of [min, max]
2171 XORRISO_ASSERT_VOLID, specify the required volume ID of the ISO.
2172 Aborts when the actual volume ID is different.
2173 XORRISO_ARGS, for expert use only. Pass arbitrary arguments to
2174 xorriso. Example: XORRISO_ARGS='-md5 all'
2175
KNOWN ISSUES / BUGS
       Hard links are currently unsupported (they will be treated as
       non-linked regular files).

       Bad signatures will be treated as empty instead of logging an
       appropriate error message.
2182
OPERATION AND DATA FORMAT
       This section describes duplicity's basic operation and the
       format of its data files.  It should not be necessary to read
       this section to use duplicity.
2187
2188 The files used by duplicity to store backup data are tarfiles in GNU
2189 tar format. For incremental backups, new files are saved normally in
2190 the tarfile. But when a file changes, instead of storing a complete
2191 copy of the file, only a diff is stored, as generated by rdiff(1). If
2192 a file is deleted, a 0 length file is stored in the tar. It is
2193 possible to restore a duplicity archive "manually" by using tar and
2194 then cp, rdiff, and rm as necessary. These duplicity archives have the
2195 extension difftar.
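A rough sketch of such a manual restore of a single changed file, assuming the volumes have already been decrypted with gpg and that the file's members live under the snapshot/ (full copy) and diff/ (rdiff delta) paths inside the difftar volumes; the volume names are placeholders:

```shell
# Extract the original copy from the full volume and the rdiff delta
# from the incremental volume, then apply the delta with rdiff(1).
tar xf duplicity-full.<time>.vol1.difftar snapshot/home/me/file
tar xf duplicity-inc.<time>.vol1.difftar diff/home/me/file
rdiff patch snapshot/home/me/file diff/home/me/file restored-file
```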
2196
2197 Both full and incremental backup sets have the same format. In effect,
2198 a full backup set is an incremental one generated from an empty
2199 signature (see below). The files in full backup sets will start with
2200 duplicity-full while the incremental sets start with duplicity-inc.
2201 When restoring, duplicity applies patches in order, so deleting, for
2202 instance, a full backup set may make related incremental backup sets
2203 unusable.
2204
2205 In order to determine which files have been deleted, and to calculate
2206 diffs for changed files, duplicity needs to process information about
2207 previous sessions. It stores this information in the form of tarfiles
2208 where each entry's data contains the signature (as produced by rdiff)
2209 of the file instead of the file's contents. These signature sets have
2210 the extension sigtar.
2211
2212 Signature files are not required to restore a backup set, but without
2213 an up-to-date signature, duplicity cannot append an incremental backup
2214 to an existing archive.
2215
2216 To save bandwidth, duplicity generates full signature sets and
2217 incremental signature sets. A full signature set is generated for each
2218 full backup, and an incremental one for each incremental backup. These
2219 start with duplicity-full-signatures and duplicity-new-signatures
       respectively.  These signatures will be stored both locally and
       remotely.  The remote signatures will be encrypted if encryption
       is enabled.  The local signatures are not encrypted and are
       stored in the archive dir (see --archive-dir).
2224
REQUIREMENTS
2226 Duplicity requires a POSIX-like operating system with a python
2227 interpreter version 3.8+ installed. It is best used under GNU/Linux.
2228
2229 Some backends also require additional components (probably available as
2230 packages for your specific platform):
2231
2232 Amazon Drive backend
2233 python-requests - http://python-requests.org
2234 python-requests-oauthlib - https://github.com/requests/requests-
2235 oauthlib
2236
2237 azure backend (Azure Storage Blob Service)
2238 Microsoft Azure Storage Blobs client library for Python -
2239 https://pypi.org/project/azure-storage-blob/
2240
2241 boto3 backend (S3 Amazon Web Services, Google Cloud Storage) (default)
2242 boto3 version 1.x - https://github.com/boto/boto3
2243
2244 box backend (box.com)
2245 boxsdk - https://github.com/box/box-python-sdk
2246
2247 cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
2248 Rackspace CloudFiles Pyrax API -
2249 http://docs.rackspace.com/sdks/guide/content/python.html
2250
2251 dpbx backend (Dropbox)
2252 Dropbox Python SDK -
2253 https://www.dropbox.com/developers/reference/sdk
2254
2255 gdocs gdata backend (legacy)
2256 Google Data APIs Python Client Library -
2257 http://code.google.com/p/gdata-python-client/
2258
       gdocs pydrive backend (default)
2260 see pydrive backend
2261
2262 gio backend (Gnome VFS API)
2263 PyGObject - http://live.gnome.org/PyGObject
2264 D-Bus (dbus)- http://www.freedesktop.org/wiki/Software/dbus
2265
2266 lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
2267 sftp, webdav[s])
2268 LFTP Client - http://lftp.yar.ru/
2269
2270 MEGA backend (only works for accounts created prior to November 2018)
2271 (mega.nz)
2272 megatools client - https://github.com/megous/megatools
2273
2274 MEGA v2 and v3 backend (works for all MEGA accounts) (mega.nz)
2275 MEGAcmd client - https://mega.nz/cmd
2276
2277 multi backend
2278 Multi -- store to more than one backend
2279 (also see A NOTE ON MULTI BACKEND ) below.
2280
2281 ncftp backend (ftp, select via ncftp+ftp://)
2282 NcFTP - http://www.ncftp.com/
2283
2284 OneDrive backend (Microsoft OneDrive)
2285 python-requests-oauthlib - https://github.com/requests/requests-
2286 oauthlib
2287
2288 Par2 Wrapper Backend
2289 par2cmdline - http://parchive.sourceforge.net/
2290
2291 pydrive backend
2292 PyDrive -- a wrapper library of google-api-python-client -
2293 https://pypi.python.org/pypi/PyDrive
2294 (also see A NOTE ON PYDRIVE BACKEND ) below.
2295
2296 rclone backend
2297 rclone - https://rclone.org/
2298
2299 rsync backend
2300 rsync client binary - http://rsync.samba.org/
2301
2302 ssh paramiko backend (default)
2303 paramiko (SSH2 for python) -
2304 http://pypi.python.org/pypi/paramiko (downloads);
2305 http://github.com/paramiko/paramiko (project page)
2306 pycrypto (Python Cryptography Toolkit) -
2307 http://www.dlitz.net/software/pycrypto/
2308
       ssh pexpect backend (legacy)
2310 sftp/scp client binaries OpenSSH - http://www.openssh.com/
2311 Python pexpect module -
2312 http://pexpect.sourceforge.net/pexpect.html
2313
2314 swift backend (OpenStack Object Storage)
2315 Python swiftclient module - https://github.com/openstack/python-
2316 swiftclient/
2317 Python keystoneclient module -
2318 https://github.com/openstack/python-keystoneclient/
2319
2320 webdav backend
2321 certificate authority database file for ssl certificate
2322 verification of HTTPS connections -
2323 http://curl.haxx.se/docs/caextract.html
2324 (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
2325 Python kerberos module for kerberos authentication -
2326 https://github.com/02strich/pykerberos
2327
2328 MediaFire backend
2329 MediaFire Python Open SDK -
2330 https://pypi.python.org/pypi/mediafire/
2331
2332 xorriso backend
2333 xorriso - https://www.gnu.org/software/xorriso/
2334
AUTHOR
2336 Original Author - Ben Escoto <bescoto@stanford.edu>
2337
2338 Current Maintainer - Kenneth Loafman <kenneth@loafman.com>
2339
2340 Continuous Contributors
2341 Edgar Soldin, Mike Terry
       Most backends were contributed individually.  Information about
       their authorship may be found in the corresponding file's
       header.
2344 Also we'd like to thank everybody posting issues to the mailing list or
2345 on launchpad, sending in patches or contributing otherwise. Duplicity
2346 wouldn't be as stable and useful if it weren't for you.
2347 A special thanks goes to rsync.net, a Cloud Storage provider with
2348 explicit support for duplicity, for several monetary donations and for
2349 providing a special "duplicity friends" rate for their offsite backup
2350 service. Email info@rsync.net for details.
2351
SEE ALSO
2353 python(1), rdiff(1), rdiff-backup(1).
2354
2355
2356
2357Version 2.1.4 October 20, 2023 DUPLICITY(1)