DUPLICITY(1)                     User Manuals                     DUPLICITY(1)


NAME
       duplicity - Encrypted incremental backup to local or remote storage.


SYNOPSIS
       For detailed descriptions of each command see chapter ACTIONS.

       duplicity [full|incremental] [options] source_directory target_url

       duplicity verify [options] [--compare-data] [--file-to-restore
       <relpath>] [--time time] source_url target_directory

       duplicity collection-status [options] [--file-changed <relpath>]
       [--show-changes-in-set <index>] target_url

       duplicity list-current-files [options] [--time time] target_url

       duplicity [restore] [options] [--file-to-restore <relpath>] [--time
       time] source_url target_directory

       duplicity remove-older-than <time> [options] [--force] target_url

       duplicity remove-all-but-n-full <count> [options] [--force] target_url

       duplicity remove-all-inc-of-but-n-full <count> [options] [--force]
       target_url

       duplicity cleanup [options] [--force] target_url

       duplicity replicate [options] [--time time] source_url target_url

DESCRIPTION
       Duplicity incrementally backs up files and folders into tar-format
       volumes encrypted with GnuPG and stores them on a remote (or local)
       storage backend.  See chapter URL FORMAT for a list of all supported
       backends and how to address them.  Because duplicity uses librsync,
       incremental backups are space efficient and only record the parts of
       files that have changed since the last backup.  Currently duplicity
       supports deleted files, full Unix permissions, uid/gid, directories,
       symbolic links, fifos, etc., but not hard links.

       If you are backing up the root directory /, remember to --exclude
       /proc, or else duplicity will probably crash on the weird stuff in
       there.

EXAMPLES
       Here is an example of a backup, using sftp to back up /home/me to
       some_dir on the other.host machine:

              duplicity /home/me sftp://uid@other.host/some_dir

       If the above is run repeatedly, the first will be a full backup, and
       subsequent ones will be incremental.  To force a full backup, use the
       full action:

              duplicity full /home/me sftp://uid@other.host/some_dir

       or enforce a periodic full backup via --full-if-older-than <time>,
       e.g. a full every month:

              duplicity --full-if-older-than 1M /home/me
              sftp://uid@other.host/some_dir

       Now suppose we accidentally delete /home/me and want to restore it the
       way it was at the time of the last backup:

              duplicity sftp://uid@other.host/some_dir /home/me

       Duplicity enters restore mode because the URL comes before the local
       directory.  If we wanted to restore just the file "Mail/article" in
       /home/me as it was three days ago into /home/me/restored_file:

              duplicity -t 3D --file-to-restore Mail/article
              sftp://uid@other.host/some_dir /home/me/restored_file

       The following command compares the latest backup with the current
       files:

              duplicity verify sftp://uid@other.host/some_dir /home/me

       Finally, duplicity recognizes several include/exclude options.  For
       instance, the following will back up the root directory, but exclude
       /mnt, /tmp, and /proc:

              duplicity --exclude /mnt --exclude /tmp --exclude /proc /
              file:///usr/local/backup

       Note that in this case the destination is the local directory
       /usr/local/backup.  The following will back up only the /home and /etc
       directories under root:

              duplicity --include /home --include /etc --exclude '**' /
              file:///usr/local/backup

       Duplicity can also access a repository via ftp.  If a user name is
       given, the environment variable FTP_PASSWORD is read to determine the
       password:

              FTP_PASSWORD=mypassword duplicity /local/dir
              ftp://user@other.host/some_dir

ACTIONS
       Duplicity knows action commands, which can be fine-tuned with options.
       The actions for backup (full, incr) and restoration (restore) can be
       left out, because duplicity detects which mode to use from the order
       of the target URL and the local folder: if the target URL comes before
       the local folder, a restore is performed; if the local folder comes
       before the target URL, that folder is backed up to the target URL.
       If a backup is requested and old signatures can be found, duplicity
       automatically performs an incremental backup.

       NOTE: The following descriptions cover some but not all options that
       can be used with each action command.  Consult the OPTIONS section for
       more detailed information.

       full <folder> <url>
              Perform a full backup.  A new backup chain is started even if
              signatures are available for an incremental backup.

       incr <folder> <url>
              If this is requested an incremental backup will be performed.
              Duplicity will abort if no old signatures can be found.

       verify [--compare-data] [--time <time>] [--file-to-restore <rel_path>]
       <url> <local_path>
              Verify tests the integrity of the backup archives at the remote
              location by downloading each file and checking both that it can
              restore the archive and that the restored file matches the
              signature of that file stored in the backup, i.e. it compares
              the archived file with its hash value from archival time.
              Verify does not actually restore and will not overwrite any
              local files.  Duplicity will exit with a non-zero error level
              if any files do not match the signature stored in the archive
              for that file.  On verbosity level 4 or higher, it will log a
              message for each file that differs from the stored signature.
              Files must be downloaded to the local machine in order to
              compare them.  Verify does not compare the backed-up version of
              the file to the current local copy of the files unless the
              --compare-data option is used (see below).
              The --file-to-restore option restricts verify to that file or
              folder.  The --time option allows selecting a backup to verify.
              The --compare-data option enables data comparison (see below).

       collection-status [--file-changed <relpath>] [--show-changes-in-set
       <index>] <url>
              Summarize the status of the backup repository by printing the
              chains and sets found, and the number of volumes in each.
              The --file-changed option summarizes the changes to the file
              (in the most recent backup chain).  The --show-changes-in-set
              option summarizes all the file changes in the index:th backup
              set (where index 0 means the latest set, 1 means the next to
              latest, etc.).

       list-current-files [--time <time>] <url>
              Lists the files contained in the most current backup or in the
              backup at the given time.  The information will be extracted
              from the signature files, not the archive data itself.  Thus
              the whole archive does not have to be downloaded, but on the
              other hand if the archive has been deleted or corrupted, this
              command will not detect it.

       restore [--file-to-restore <relpath>] [--time <time>] <url>
       <target_folder>
              You can restore the full monty or selected folders/files from a
              specific time.  Use the relative path as it is printed by
              list-current-files.  Usually not needed as duplicity enters
              restore mode when it detects that the URL comes before the
              local folder.

       remove-older-than <time> [--force] <url>
              Delete all backup sets older than the given time.  Old backup
              sets will not be deleted if backup sets newer than time depend
              on them.  See the TIME FORMATS section for more information.
              Note, this action cannot be combined with backup or other
              actions, such as cleanup.  Note also that --force will be
              needed to delete the files instead of just listing them.

       remove-all-but-n-full <count> [--force] <url>
              Delete all backup sets that are older than the count:th last
              full backup (in other words, keep the last count full backups
              and associated incremental sets).  count must be larger than
              zero.  A value of 1 means that only the single most recent
              backup chain will be kept.  Note that --force will be needed to
              delete the files instead of just listing them.
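
              For instance, to keep the three most recent full backups (plus
              their increments) and delete everything older, reusing the
              illustrative host from the EXAMPLES section:

                     duplicity remove-all-but-n-full 3 --force
                     sftp://uid@other.host/some_dir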

       remove-all-inc-of-but-n-full <count> [--force] <url>
              Delete incremental sets of all backup sets that are older than
              the count:th last full backup (in other words, keep only old
              full backups and not their increments).  count must be larger
              than zero.  A value of 1 means that only the single most recent
              backup chain will be kept intact.  Note that --force will be
              needed to delete the files instead of just listing them.

       cleanup [--force] <url>
              Delete the extraneous duplicity files on the given backend.
              Non-duplicity files, or files in complete data sets will not be
              deleted.  This should only be necessary after a duplicity
              session fails or is aborted prematurely.  Note that --force
              will be needed to delete the files instead of just listing
              them.

       replicate [--time time] <source_url> <target_url>
              Replicate backup sets from source to target backend.  Files
              will be (re)-encrypted and (re)-compressed depending on normal
              backend options.  Signatures and volumes will not get
              recomputed, thus options like --volsize or --max-blocksize have
              no effect.  When --time time is given, only backup sets older
              than time will be replicated.
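
              For example, a sketch that mirrors backup sets older than one
              year from the illustrative sftp host to an S3 bucket (the
              bucket name is hypothetical):

                     duplicity replicate --time 1Y
                     sftp://uid@other.host/some_dir s3:///my-bucket/some_dir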

OPTIONS
       --allow-source-mismatch
              Do not abort on attempts to use the same archive dir or remote
              backend to back up different directories.  duplicity will tell
              you if you need this switch.

       --archive-dir path
              The archive directory.

              NOTE: This option changed in 0.6.0.  The archive directory is
              now necessary in order to manage persistence for current and
              future enhancements.  As such, this option is now used only to
              change the location of the archive directory.  The archive
              directory should not be deleted, or duplicity will have to
              recreate it from the remote repository (which may require
              decrypting the backup contents).

              When backing up or restoring, this option specifies that the
              local archive directory is to be created in path.  If the
              archive directory is not specified, the default will be to
              create the archive directory in ~/.cache/duplicity/.

              The archive directory can be shared between backups to multiple
              targets, because a subdirectory of the archive dir is used for
              individual backups (see --name).

              The combination of archive directory and backup name must be
              unique in order to separate the data of different backups.

              The interaction between the --archive-dir and the --name
              options allows for four possible combinations for the location
              of the archive dir:

              1.  neither specified (default)
                  ~/.cache/duplicity/hash-of-url

              2.  --archive-dir=/arch, no --name
                  /arch/hash-of-url

              3.  no --archive-dir, --name=foo
                  ~/.cache/duplicity/foo

              4.  --archive-dir=/arch, --name=foo
                  /arch/foo
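
              As an illustration (the directory and name are hypothetical),
              the following keeps the local cache for this backup under
              /var/cache/duplicity/home_daily:

                     duplicity --archive-dir /var/cache/duplicity
                     --name home_daily /home/me
                     sftp://uid@other.host/some_dir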

       --asynchronous-upload
              (EXPERIMENTAL) Perform file uploads asynchronously in the
              background, with respect to volume creation.  This means that
              duplicity can upload a volume while, at the same time,
              preparing the next volume for upload.  The intended end-result
              is a faster backup, because the local CPU and your bandwidth
              can be more consistently utilized.  Use of this option implies
              additional need for disk space in the temporary storage
              location; rather than needing to store only one volume at a
              time, enough storage space is required to store two volumes.

       --azure-blob-tier
              Standard storage tier used for backup files (Hot|Cool|Archive).

       --azure-max-single-put-size
              Specify the largest supported upload size (in bytes) for which
              the Azure library makes only one put call.  If the content size
              is known and below this value, the Azure library will perform
              only one put request to upload one block.

       --azure-max-block-size
              Specify the block size (in bytes) used by the Azure library to
              upload a blob if it is split into multiple blocks.  The maximum
              block size the service supports is 104857600 (100MiB) and the
              default is 4194304 (4MiB).

       --azure-max-connections
              Specify the maximum number of connections used to transfer a
              single blob to Azure when the blob size exceeds 64MB.  The
              default value is 2.

       --backend-retry-delay number
              Specifies the number of seconds that duplicity waits after an
              error has occurred before attempting to repeat the operation.

       --cf-backend backend
              Allows the explicit selection of a cloudfiles backend.
              Defaults to pyrax.  Alternatively you might choose cloudfiles.

       --b2-hide-files
              Causes Duplicity to hide files in B2 instead of deleting them.
              Useful in combination with B2's lifecycle rules.

       --compare-data
              Enable data comparison of regular files on action verify.  This
              conducts a verify as described above to verify the integrity of
              the backup archives, but additionally compares restored files
              to those in target_directory.  Duplicity will not replace any
              files in target_directory.  Duplicity will exit with a non-zero
              error level if the files do not correctly verify or if any
              files from the archive differ from those in target_directory.
              On verbosity level 4 or higher, it will log a message for each
              file that differs from its equivalent in target_directory.
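
              For example, to verify the latest backup and additionally
              compare the archived data against the current files (URL as in
              the EXAMPLES section):

                     duplicity verify --compare-data
                     sftp://uid@other.host/some_dir /home/me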

       --copy-links
              Resolve symlinks during backup.  Enabling this will resolve &
              back up the symlink's file/folder data instead of the symlink
              itself, potentially increasing the size of the backup.

       --dry-run
              Calculate what would be done, but do not perform any backend
              actions.

       --encrypt-key key-id
              When backing up, encrypt to the given public key, instead of
              using symmetric (traditional) encryption.  Can be specified
              multiple times.  The key-id can be given in any of the formats
              supported by GnuPG; see gpg(1), section "HOW TO SPECIFY A USER
              ID" for details.

       --encrypt-secret-keyring filename
              This option can only be used with --encrypt-key, and changes
              the path to the secret keyring for the encrypt key to filename.
              This keyring is not used when creating a backup.  If not
              specified, the default secret keyring is used, which is usually
              located at ~/.gnupg/secring.gpg

       --encrypt-sign-key key-id
              Convenience parameter.  Same as --encrypt-key key-id --sign-key
              key-id.

       --exclude shell_pattern
              Exclude the file or files matched by shell_pattern.  If a
              directory is matched, then files under that directory will also
              be matched.  See the FILE SELECTION section for more
              information.

       --exclude-device-files
              Exclude all device files.  This can be useful for
              security/permissions reasons or if duplicity is not handling
              device files correctly.

       --exclude-filelist filename
              Excludes the files listed in filename, with each line of the
              filelist interpreted according to the same rules as --include
              and --exclude.  See the FILE SELECTION section for more
              information.

       --exclude-if-present filename
              Exclude directories if filename is present.  Allows the user to
              specify folders that they do not wish to back up by adding a
              specified file (e.g. ".nobackup") instead of maintaining a
              comprehensive exclude/include list.
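
              For instance, to skip every directory that contains a marker
              file named ".nobackup" (the marker name is the user's choice):

                     duplicity --exclude-if-present .nobackup /home/me
                     sftp://uid@other.host/some_dir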

       --exclude-older-than time
              Exclude any files whose modification date is earlier than the
              specified time.  This can be used to produce a partial backup
              that contains only recently changed files.  See the TIME
              FORMATS section for more information.

       --exclude-other-filesystems
              Exclude files on file systems (identified by device number)
              other than the file system the root of the source directory is
              on.

       --exclude-regexp regexp
              Exclude files matching the given regexp.  Unlike the --exclude
              option, this option does not match files in a directory it
              matches.  See the FILE SELECTION section for more information.

       --files-from filename
              Read a list of files to back up from filename rather than
              searching the entire backup source directory.  Operation is
              otherwise normal, just on the specified subset of the backup
              source directory.

              Files must be specified one per line and relative to the backup
              source directory.  Any absolute paths will raise an error.  All
              characters per line are significant and treated as part of the
              path, including leading and trailing whitespace.  Lines are
              separated by newlines or nulls, depending on whether the
              --null-separator switch was given.

              It is not necessary to include the parent directory of listed
              files, their inclusion is implied.  However, the content of any
              explicitly listed directories is not implied.  All required
              files must be listed when this option is used.
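
              A minimal sketch (the list contents are illustrative): given a
              file containing one relative path per line,

                     printf 'Mail/article\nDocuments/report.txt\n' > list.txt
                     duplicity --files-from list.txt /home/me
                     sftp://uid@other.host/some_dir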

       --file-prefix prefix
       --file-prefix-manifest prefix
       --file-prefix-archive prefix
       --file-prefix-signature prefix
              Adds a prefix to all files, or only to manifest, archive, or
              signature files, respectively.

              The same set of prefixes must be passed in on backup and
              restore.

              If both global and type-specific prefixes are set, the global
              prefix will go before the type-specific prefixes.

              See also A NOTE ON FILENAME PREFIXES
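
              For example (the prefixes are arbitrary user-chosen strings),
              to tag every uploaded file with "backup_" and archive volumes
              additionally with "arch_":

                     duplicity --file-prefix backup_
                     --file-prefix-archive arch_ /home/me
                     sftp://uid@other.host/some_dir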

       --file-to-restore path
              This option may be given in restore mode, causing only path to
              be restored instead of the entire contents of the backup
              archive.  path should be given relative to the root of the
              directory backed up.

       --filter-globbing
       --filter-ignorecase
       --filter-literal
       --filter-regexp
       --filter-strictcase
              Change the interpretation of patterns passed to the file
              selection condition option arguments --exclude and --include
              (and variations thereof, including file lists).  These options
              can appear multiple times to switch between shell globbing
              (default), literal strings, and regular expressions, case
              sensitive (default) or not.  The specified interpretation
              applies for all subsequent selection conditions up until the
              next --filter option.

              See the FILE SELECTION section for more information.

       --full-if-older-than time
              Perform a full backup if an incremental backup is requested,
              but the latest full backup in the collection is older than the
              given time.  See the TIME FORMATS section for more information.

       --force
              Proceed even if data loss might result.  Duplicity will let the
              user know when this option is required.

       --ftp-passive
              Use passive (PASV) data connections.  The default is to use
              passive, but to fall back to regular if the passive connection
              fails or times out.

       --ftp-regular
              Use regular (PORT) data connections.

       --gio  Use the GIO backend and interpret any URLs as GIO would.

       --hidden-encrypt-key key-id
              Same as --encrypt-key, but it hides the user's key id from the
              encrypted file.  It uses gpg's --hidden-recipient option to
              obfuscate the owner of the backup.  On restore, gpg will
              automatically try all available secret keys in order to decrypt
              the backup.  See gpg(1) for more details.

       --ignore-errors
              Try to ignore certain errors if they happen.  This option is
              only intended to allow the restoration of a backup in the face
              of certain problems that would otherwise cause the backup to
              fail.  It is never recommended to use this option unless you
              have a situation where you are trying to restore from backup
              and it is failing because of an issue which you want duplicity
              to ignore.  Even then, depending on the issue, this option may
              not have an effect.

              Please note that while ignored errors will be logged, there
              will be no summary at the end of the operation to tell you what
              was ignored, if anything.  If this is used for emergency
              restoration of data, it is recommended that you run the backup
              in such a way that you can revisit the backup log (look for
              lines containing the string IGNORED_ERROR).

              If you ever have to use this option for reasons that are not
              understood, or understood but not your own responsibility,
              please contact the duplicity maintainers.  The need to use this
              option under production circumstances would normally be
              considered a bug.

       --imap-full-address email_address
              The full email address of the user name when logging into an
              imap server.  If not supplied, just the user name part of the
              email address is used.

       --imap-mailbox option
              Allows you to specify a different mailbox.  The default is
              "INBOX".  Other languages may require a different mailbox than
              the default.

       --gpg-binary file_path
              Allows you to force duplicity to use file_path as the gpg
              command line binary.  Can be an absolute or relative file path
              or a file name.  The default value is 'gpg'.  The binary will
              be located via the PATH environment variable.

       --gpg-options options
              Allows you to pass options to gpg encryption.  The options list
              should be of the form "--opt1 --opt2=parm" where the string is
              quoted and the only spaces allowed are between options.
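
              An illustrative sketch (the gpg option shown is a standard gpg
              option; see gpg(1)):

                     duplicity --gpg-options "--cipher-algo=AES256" /home/me
                     sftp://uid@other.host/some_dir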

       --include shell_pattern
              Similar to --exclude but include matched files instead.  Unlike
              --exclude, this option will also match parent directories of
              matched files (although not necessarily their contents).  See
              the FILE SELECTION section for more information.

       --include-filelist filename
              Like --exclude-filelist, but include the listed files instead.
              See the FILE SELECTION section for more information.

       --include-regexp regexp
              Include files matching the regular expression regexp.  Only
              files explicitly matched by regexp will be included by this
              option.  See the FILE SELECTION section for more information.

       --log-fd number
              Write specially-formatted versions of output messages to the
              specified file descriptor.  The format used is designed to be
              easily consumable by other programs.

       --log-file filename
              Write specially-formatted versions of output messages to the
              specified file.  The format used is designed to be easily
              consumable by other programs.

       --max-blocksize number
              Determines the size of the blocks examined for changes during
              the diff process.  For files < 1MB the blocksize is a constant
              of 512.  For files over 1MB the size is given by:

                     file_blocksize = int((file_len / (2000 * 512)) * 512)
                     return min(file_blocksize, config.max_blocksize)

              where config.max_blocksize defaults to 2048.  If you specify a
              larger max_blocksize, your difftar files will be larger, but
              your sigtar files will be smaller.  If you specify a smaller
              max_blocksize, the reverse occurs.  The --max-blocksize option
              should be in multiples of 512.

       --name symbolicname
              Set the symbolic name of the backup being operated on.  The
              intent is to use a separate name for each logically distinct
              backup.  For example, someone may use "home_daily_s3" for the
              daily backup of a home directory to Amazon S3.  The structure
              of the name is up to the user, it is only important that the
              names be distinct.  The symbolic name is currently only used to
              affect the expansion of --archive-dir, but may be used for
              additional features in the future.  Users running more than one
              distinct backup are encouraged to use this option.

              If not specified, the default value is a hash of the backend
              URL.

       --no-compression
              Do not use GZip to compress files on the remote system.

       --no-encryption
              Do not use GnuPG to encrypt files on the remote system.

       --no-print-statistics
              By default duplicity will print statistics about the current
              session after a successful backup.  This switch disables that
              behavior.

       --no-files-changed
              By default duplicity will collect file names and their change
              action (add, del, chg) in memory during backup.  This can be
              quite expensive in memory use, especially with millions of
              small files.  This flag turns off that collection.  This means
              that the --file-changed option for collection-status will
              return nothing.

       --null-separator
              Use nulls (\0) instead of newlines (\n) as line separators,
              which may help when dealing with filenames containing newlines.
              This affects the expected format of the files specified by the
              --{include|exclude}-filelist switches and the --files-from
              option, as well as the format of the directory statistics file.

       --numeric-owner
              On restore always use the numeric uid/gid from the archive and
              not the archived user/group names, which is the default
              behaviour.  Recommended for restoring from live CDs which might
              have users with identical names but different uids/gids.

       --do-not-restore-ownership
              Ignores the uid/gid from the archive and keeps the current
              user's ownership.  Recommended for restoring data to a mounted
              filesystem which does not support Unix ownership, or when root
              privileges are not available.

       --num-retries number
              Number of retries to make on errors before giving up.

       --old-filenames
              Use the old filename format (incompatible with Windows/Samba)
              rather than the new filename format.

       --par2-options options
              Verbatim options to pass to par2.

       --par2-redundancy percent
              Adjust the level of redundancy in percent for Par2 recovery
              files (default 10%).

       --par2-volumes number
              Number of Par2 volumes to create (default 1).

       --progress
              When selected, duplicity will output the current upload
              progress and estimated upload time.  To annotate changes, it
              will perform a first dry-run before a full or incremental
              backup, and then run the real operation estimating the real
              upload progress.

       --progress-rate number
              Sets the update rate at which duplicity will output the upload
              progress messages (requires the --progress option).  The
              default is to print the status every 3 seconds.
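
              For example, to print progress every 10 seconds during a
              backup:

                     duplicity --progress --progress-rate 10 /home/me
                     sftp://uid@other.host/some_dir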

       --rename <original path> <new path>
              Treats the path orig in the backup as if it were the path new.
              Can be passed multiple times.  An example:

                     duplicity restore --rename Documents/metal Music/metal
                     sftp://uid@other.host/some_dir /home/me

       --rsync-options options
              Allows you to pass options to the rsync backend.  The options
              list should be of the form "opt1=parm1 opt2=parm2" where the
              option string is quoted and the only spaces allowed are between
              options.  The option string will be passed verbatim to rsync,
              after any internally generated option designating the remote
              port to use.  Here is a possibly useful example:

                     duplicity --rsync-options="--partial-dir=.rsync-partial"
                     /home/me rsync://uid@other.host/some_dir

       --s3-endpoint-url url
              Specifies the endpoint URL of the S3 storage.

              NOTE: Due to API restrictions the legacy boto backend will use
              only the scheme (protocol) and hostname values from the given
              url.  Choosing 'http://' will disable SSL encryption, just as
              if --s3-unencrypted-connection were set.

       --s3-european-buckets
              When using the Amazon S3 backend, create buckets in Europe
              instead of the default (requires --s3-use-new-style).  Also see
              the EUROPEAN S3 BUCKETS section.

              NOTE: This option does not apply when using the boto3 backend,
              which does not create buckets.

              See also A NOTE ON AMAZON S3 below.

       --s3-multipart-chunk-size
              Chunk size (in MB, default is 20MB) used for S3 multipart
              uploads.  Adjust this to maximize bandwidth usage.  For
              example, a chunk size of 10MB and a volsize of 100MB would
              result in 10 chunks per volume upload.

              NOTE: Your --volsize should ideally be an even multiple of this
              value for optimal performance.

              See also A NOTE ON AMAZON S3 below.

       --s3-multipart-max-procs
              Maximum number of concurrent uploads when performing a
              multipart upload.  The default is 4.  You can adjust this
              number to maximize bandwidth and CPU utilization.

              NOTE: Too many concurrent uploads may have diminishing returns.

              See also A NOTE ON AMAZON S3 below.

       --s3-multipart-max-timeout
              You can control the maximum time (in seconds) a multipart
              upload can spend on uploading a single chunk to S3.  This may
              be useful if you find your system hanging on multipart uploads
              or if you'd like to control the time variance when uploading to
              S3 to ensure you kill connections to slow S3 endpoints.

              NOTE: This has no effect when using the boto3 backend.

              See also A NOTE ON AMAZON S3 below.

       --s3-region-name
              Specifies the region of the S3 storage.  Usually mandatory if
              the bucket is created in a specific region.

              NOTE: Only supported by the boto3 backend.

       --s3-unencrypted-connection
              Disable SSL for connections to S3.  This may be much faster, at
              some cost to confidentiality.

              With this option set, anyone between your computer and S3 can
              observe the traffic and will be able to tell: that you are
              using Duplicity, the name of the bucket, your AWS Access Key
              ID, the increment dates and the amount of data in each
              increment.

              This option affects only the connection, not the GPG encryption
              of the backup increment files.  Unless that is disabled, an
              observer will not be able to see the file names or contents.

              See also A NOTE ON AMAZON S3 below.

       --s3-use-deep-archive
              Store volumes using S3 Glacier Deep Archive when uploading to
              Amazon S3.  This storage class has a lower cost of storage but
              a higher per-request cost, along with delays of up to 48 hours
              from the time of a retrieval request.  This storage cost is
              calculated against a 180-day storage minimum.  According to
              Amazon this storage is ideal for data archiving and long-term
              backup, offering 99.999999999% durability.  To restore a backup
              you will have to manually migrate all data stored on AWS
              Glacier Deep Archive back to Standard S3 and wait for AWS to
              complete the migration.

              NOTE: Duplicity will store the manifest.gpg files from full and
              incremental backups on AWS S3 standard storage to allow quick
              retrieval for later incremental backups; all other data is
              stored in S3 Glacier Deep Archive.

       --s3-use-glacier
              Store volumes using Glacier Flexible Storage when uploading to
              Amazon S3.  This storage class has a lower cost of storage but
              a higher per-request cost, along with delays of up to 12 hours
              from the time of a retrieval request.  This storage cost is
              calculated against a 90-day storage minimum.  According to
              Amazon this storage is ideal for data archiving and long-term
              backup, offering 99.999999999% durability.  To restore a backup
              you will have to manually migrate all data stored on AWS
              Glacier back to Standard S3 and wait for AWS to complete the
              migration.

              NOTE: Duplicity will store the manifest.gpg files from full and
              incremental backups on AWS S3 standard storage to allow quick
              retrieval for later incremental backups; all other data is
              stored in S3 Glacier.

       --s3-use-glacier-ir
              Store volumes using Glacier Instant Retrieval when uploading to
              Amazon S3.  This storage class is similar to Glacier Flexible
              Storage but offers instant retrieval at standard speeds.

              NOTE: Duplicity will store the manifest.gpg files from full and
              incremental backups on AWS S3 standard storage to allow quick
              retrieval for later incremental backups; all other data is
              stored in S3 Glacier.

       --s3-use-ia
              Store volumes using Standard - Infrequent Access when uploading
              to Amazon S3.  This storage class has a lower storage cost but
              a higher per-request cost, and the storage cost is calculated
              against a 30-day storage minimum.  According to Amazon, this
              storage is ideal for long-term file storage, backups, and
              disaster recovery.

       --s3-use-multiprocessing
              Allow multipart volume uploads to S3 through multiprocessing.
              This option requires Python 2.6 and can be used to make uploads
              to S3 more efficient.  If enabled, files duplicity uploads to
              S3 will be split into chunks and uploaded in parallel.  Useful
              if you want to saturate your bandwidth or if large files are
              failing during upload.

              NOTE: This has no effect when using the boto3 backend.  Boto3
              always attempts to use multiprocessing.

              See also A NOTE ON AMAZON S3 below.

       --s3-use-new-style
              When operating on Amazon S3 buckets, use new-style subdomain
              bucket addressing.  This is now the preferred method to access
              Amazon S3, but is not backwards compatible if your bucket name
              contains upper-case characters or other characters that are not
              valid in a hostname.

              NOTE: This option has no effect when using the boto3 backend,
              which will always use new style subdomain bucket naming.

              See also A NOTE ON AMAZON S3 below.

       --s3-use-onezone-ia
              Store volumes using One Zone - Infrequent Access when uploading
              to Amazon S3.  This storage is similar to Standard - Infrequent
              Access, but only stores object data in one Availability Zone.

       --s3-use-rrs
              Store volumes using Reduced Redundancy Storage when uploading
              to Amazon S3.  This will lower the cost of storage but also
              lower the durability of stored volumes to 99.99% instead of the
              99.999999999% durability offered by Standard Storage on S3.

       --s3-use-server-side-encryption
              Allow use of server side encryption in S3.

       --s3-use-server-side-kms-encryption
       --s3-kms-key-id key_id
       --s3-kms-grant grant
              Enable server-side encryption using the key management service.

       --scp-command command
              (only ssh pexpect backend with --use-scp enabled) The command
              will be used instead of "scp" to send or receive files.  To
              list and delete existing files, the sftp command is used.
              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.

       --sftp-command command
              (only ssh pexpect backend) The command will be used instead of
              "sftp".
              See also A NOTE ON SSH BACKENDS section SSH pexpect backend.

       --short-filenames
              If this option is specified, the names of the files duplicity
              writes will be shorter (about 30 chars) but less
              understandable.  This may be useful when backing up to MacOS or
              another OS or FS that doesn't support long filenames.

       --sign-key key-id
              This option can be used when backing up, restoring or
              verifying.  When backing up, all backup files will be signed
              with keyid key.  When restoring, duplicity will signal an error
              if any remote file is not signed with the given key-id.  The
              key-id can be given in any of the formats supported by GnuPG;
              see gpg(1), section "HOW TO SPECIFY A USER ID" for details.
              Should be specified only once because currently only one
              signing key is supported.  The last entry overrides all other
              entries.
              See also A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
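
              A sketch of a signed, public-key encrypted full backup (the key
              id is hypothetical):

                     duplicity full --encrypt-sign-key 839E6A2856538CCF
                     /home/me sftp://uid@other.host/some_dir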

       --ssh-askpass
              Tells the ssh backend to prompt the user for the remote system
              password, if it was not defined in the target url and no
              FTP_PASSWORD env var is set.  This password is also used for
              passphrase-protected ssh keys.

       --ssh-options options
              Allows you to pass options to the ssh backend.  Can be
              specified multiple times or as a space separated options list.
              The options list should be of the form "-oOpt1='parm1'
              -oOpt2='parm2'" where the option string is quoted and the only
              spaces allowed are between options.  The option string will be
              passed verbatim to both scp and sftp, whose command line syntax
              differs slightly; the options should therefore be given in the
              long option format described in ssh_config(5).

              example of a list:

                     duplicity --ssh-options="-oProtocol=2
                     -oIdentityFile='/my/backup/id'" /home/me
                     scp://user@host/some_dir

              example with multiple parameters:

                     duplicity --ssh-options="-oProtocol=2"
                     --ssh-options="-oIdentityFile='/my/backup/id'" /home/me
                     scp://user@host/some_dir

              NOTE: The ssh paramiko backend currently supports only the -i
              or -oIdentityFile or -oUserKnownHostsFile or
              -oGlobalKnownHostsFile settings.  If needed, provide more host
              specific options via an ssh_config file.

       --ssl-cacert-file file
              (only webdav & lftp backend) Provide a cacert file for ssl
              certificate verification.

              See also A NOTE ON SSL CERTIFICATE VERIFICATION.

       --ssl-cacert-path path/to/certs/
              (only webdav backend and python 2.7.9+ OR lftp+webdavs and a
              recent lftp) Provide a path to a folder containing cacert files
              for ssl certificate verification.

              See also A NOTE ON SSL CERTIFICATE VERIFICATION.

       --ssl-no-check-certificate
              (only webdav & lftp backend) Disable ssl certificate
              verification.

              See also A NOTE ON SSL CERTIFICATE VERIFICATION.

       --swift-storage-policy
              Use this storage policy when operating on Swift containers.

              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS.

       --metadata-sync-mode mode
              This option defaults to 'partial', but you can set it to
              'full'.

              Use 'partial' to avoid syncing metadata for backup chains that
              you are not going to use.  This saves time when restoring for
              the first time, and lets you restore an old backup that was
              encrypted with a different passphrase by supplying only the
              target passphrase.

              Use 'full' to sync metadata for all backup chains on the
              remote.

       --tempdir directory
              Use this existing directory for duplicity temporary files
              instead of the system default, which is usually the /tmp
              directory.  This option supersedes any environment variable.

              See also ENVIRONMENT VARIABLES.

       -ttime, --time time, --restore-time time
              Specify the time from which to restore or list files.

       --time-separator char
              Use char as the time separator in filenames instead of a colon
              (":").

       --timeout seconds
              Use seconds as the socket timeout value if duplicity begins to
              timeout during network operations.  The default is 30 seconds.

       --use-agent
              If this option is specified, then --use-agent is passed to the
              GnuPG encryption process and it will try to connect to
              gpg-agent before it asks for a passphrase for --encrypt-key or
              --sign-key if needed.

              NOTE: Contrary to previous versions of duplicity, this option
              will also be honored by GnuPG 2 and newer versions.  If GnuPG 2
              is in use, duplicity passes the option
              --pinentry-mode=loopback to the gpg process unless --use-agent
              is specified on the duplicity command line.  This has the
              effect that GnuPG 2 uses the agent only if --use-agent is
              given, just like GnuPG 1.

       --verbosity level, -vlevel
              Specify output verbosity level (log level).  Named levels and
              corresponding values are 0 Error, 2 Warning, 4 Notice
              (default), 8 Info, 9 Debug (noisiest).
              level may also be
              a character: e, w, n, i, d
              a word: error, warning, notice, info, debug

              The options -v4, -vn and -vnotice are functionally equivalent,
              as are the mixed/upper-case versions -vN, -vNotice and
              -vNOTICE.

       --version
              Print duplicity's version and quit.

       --volsize number
              Change the volume size to number MB.  Default is 200MB.
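
              For example, to upload 500MB volumes instead of the default:

                     duplicity --volsize 500 /home/me
                     sftp://uid@other.host/some_dir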

       --webdav-headers csv formatted key,value pairs
              The input format is a comma separated list of key,value pairs.
              Standard CSV encoding may be used.

              For example to set a Cookie use 'Cookie,name=value', or
              '"Cookie","name=value"'.

              You can set multiple headers, e.g.
              '"Cookie","name=value","Authorization","xxx"'.

ENVIRONMENT VARIABLES
       TMPDIR, TEMP, TMP
              In decreasing order of importance, specifies the directory to
              use for temporary files (inherited from Python's tempfile
              module).  The --tempdir option supersedes any of these.

       FTP_PASSWORD
              Supported by most backends which are password capable.  More
              secure than setting it in the backend url (which might be
              readable in the operating system's process listing to other
              users on the same machine).

       PASSPHRASE
              This passphrase is passed to GnuPG.  If this is not set, the
              user will be prompted for the passphrase.

       SIGN_PASSPHRASE
              The passphrase to be used for --sign-key.  If omitted, and the
              sign key is also one of the keys to encrypt against, PASSPHRASE
              will be reused instead.  Otherwise, if a passphrase is needed
              but not set, the user will be prompted for it.
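
       For example, supplying both passphrases non-interactively (the values
       and the key id are placeholders):

              PASSPHRASE=backup_pw SIGN_PASSPHRASE=signing_pw duplicity full
              --sign-key 839E6A2856538CCF /home/me
              sftp://uid@other.host/some_dir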

       Other environment variables may be used to configure specific
       backends.  See the notes for the particular backend.

URL FORMAT
       Duplicity uses the URL format (as standard as possible) to define data
       locations.  The major difference is that the whole host section is
       optional for some backends.
       NOTE: If the path starts with an extra '/' it usually denotes an
       absolute path on the backend.

       The generic format for a URL is:

              scheme://[[user[:password]@]host[:port]/][/]path

       or

              scheme://[/]path

       It is not recommended to expose the password on the command line,
       since it could be revealed to anyone with permission to do process
       listings; it is permitted, however.  Consider setting the environment
       variable FTP_PASSWORD instead, which is used by most, if not all,
       backends, regardless of its name.

       In protocols that support it, the path may be preceded by a single
       slash, '/path', to represent a relative path to the target home
       directory, or preceded by a double slash, '//path', to represent an
       absolute filesystem path.

       NOTE: Scheme (protocol) access may be provided by more than one
       backend.  In case the default backend is buggy or simply not working
       in a specific case, it might be worth trying an alternative
       implementation.  Alternative backends can be selected by prefixing the
       scheme with the name of the alternative backend, e.g. ncftp+ftp://,
       and are mentioned below the scheme's syntax summary.

       Formats of each of the URL schemes follow:

       Amazon Drive Backend
              ad://some_dir

              See also A NOTE ON AMAZON DRIVE

       Azure
              azure://container-name

              See also A NOTE ON AZURE ACCESS

       B2
              b2://account_id[:application_key]@bucket_name/[folder/]

       Box
              box:///some_dir[?config=path_to_config]

              See also A NOTE ON BOX ACCESS

       Cloud Files (Rackspace)
              cf+http://container_name

              See also A NOTE ON CLOUD FILES ACCESS

       Dropbox
              dpbx:///some_dir

              Make sure to read A NOTE ON DROPBOX ACCESS first!

       File (local file system)
              file://[relative|/absolute]/local/path

       FISH (Files transferred over Shell protocol) over ssh
              fish://user[:password]@other.host[:port]/[relative|/absolute]_path

       FTP
              ftp[s]://user[:password]@other.host[:port]/some_dir

              NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend,
              default is lftp+ftp://...

       Google Cloud Storage (GCS via Interoperable Access)
              s3://bucket[/path]

              NOTE: use boto+gs://bucket[/path] or boto+s3://bucket[/path] to
              use the legacy boto backend.  default is boto3+s3://

              See A NOTE ON GOOGLE CLOUD STORAGE about the needed endpoint
              option and env vars for authentication.

       Google Docs
              gdocs://user[:password]@other.host/some_dir

              NOTE: use pydrive+, gdata+ prefixes to enforce a specific
              backend, default is pydrive+gdocs://...

       Google Drive
              gdrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir

              See also A NOTE ON GDRIVE BACKEND below.

       HSI
              hsi://user[:password]@other.host/some_dir

       hubiC
              cf+hubic://container_name

              See also A NOTE ON HUBIC

       IMAP email storage
              imap[s]://user[:password]@host.com[/from_address_prefix]

              See also A NOTE ON IMAP

       MediaFire
              mf://user[:password]@mediafire.com/some_dir

              See also A NOTE ON MEDIAFIRE BACKEND below.

       MEGA.nz cloud storage (only works for accounts created prior to
       November 2018, uses "megatools")
              mega://user[:password]@mega.nz/some_dir

              NOTE: if not given in the URL, relies on the password being
              stored within $HOME/.megarc (as used by the "megatools"
              utilities)

       MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd"
       tools)
              megav2://user[:password]@mega.nz/some_dir
              megav3://user[:password]@mega.nz/some_dir[?no_logout=1] (For
              latest MEGAcmd)

              NOTE: although "MEGAcmd" no longer uses a configuration file,
              for convenience this backend searches for the user password in
              the $HOME/.megav2rc file (same syntax as the old $HOME/.megarc)
                     [Login]
                     Username = MEGA_USERNAME
                     Password = MEGA_PASSWORD

       multi
              multi:///path/to/config.json

              See also A NOTE ON MULTI BACKEND below.

       OneDrive Backend
              onedrive://some_dir

       Par2 Wrapper Backend
              par2+scheme://[user[:password]@]host[:port]/[/]path

              See also A NOTE ON PAR2 WRAPPER BACKEND

       Public Cloud Archive (OVH)
              pca://container_name[/prefix]

              See also A NOTE ON PCA ACCESS

       pydrive
              pydrive://<service account's email
              address>@developer.gserviceaccount.com/some_dir

              See also A NOTE ON PYDRIVE BACKEND below.

       Rclone Backend
              rclone://remote:/some_dir

              See also A NOTE ON RCLONE BACKEND

       Rsync via daemon
              rsync://user[:password]@host.com[:port]::[/]module/some_dir

       Rsync over ssh (only key auth)
              rsync://user@host.com[:port]/[relative|/absolute]_path

       S3 storage (Amazon)
              s3:///bucket_name[/path]

              defaults to the boto3 backend boto3+s3://
              alternatively try the legacy boto backend
              boto+s3://host[:port]/bucket_name[/path]

              For details see A NOTE ON AMAZON S3 below.

       SCP/SFTP Secure Copy Protocol/SSH File Transfer Protocol
              scp://.. or
              sftp://user[:password]@other.host[:port]/[relative|/absolute]_path

              defaults are paramiko+scp:// and paramiko+sftp://
              alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
              See also --ssh-askpass, --ssh-options and A NOTE ON SSH
              BACKENDS.

       slate
              slate://[slate-id]

              See also A NOTE ON SLATE BACKEND

       Swift (Openstack)
              swift://container_name[/prefix]

              See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS

       Tahoe-LAFS
              tahoe://alias/directory

       WebDAV
              webdav[s]://user[:password]@other.host[:port]/some_dir

              alternatively try lftp+webdav[s]://

TIME FORMATS
       duplicity uses time strings in two places.  Firstly, many of the files
       duplicity creates will have the time in their filenames in the w3
       datetime format as described in a w3 note at
       http://www.w3.org/TR/NOTE-datetime.  Basically they look like
       "2001-07-15T04:09:38-07:00", which means what it looks like.  The
       "-07:00" section means the time zone is 7 hours behind UTC.
       Secondly, the -t, --time, and --restore-time options take a time
       string, which can be given in any of several formats:

       1.  the string "now" (refers to the current time)

       2.  a sequence of digits, like "123456890" (indicating the time in
           seconds after the epoch)

       3.  A string like "2002-01-25T07:00:00+02:00" in datetime format

       4.  An interval, which is a number followed by one of the characters
           s, m, h, D, W, M, or Y (indicating seconds, minutes, hours, days,
           weeks, months, or years respectively), or a series of such pairs.
           In this case the string refers to the time that preceded the
           current time by the length of the interval.  For instance, "1h78m"
           indicates the time that was one hour and 78 minutes ago.  The
           calendar here is unsophisticated: a month is always 30 days, a
           year is always 365 days, and a day is always 86400 seconds.

       5.  A date format of the form YYYY/MM/DD, YYYY-MM-DD, MM/DD/YYYY, or
           MM-DD-YYYY, which indicates midnight on the day in question,
           relative to the current time zone settings.  For instance,
           "2002/3/5", "03-05-2002", and "2002-3-05" all mean March 5th,
           2002.
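
       For instance, to delete backup sets older than six months, or to
       restore the backup as of March 5th, 2002 (URL as in the EXAMPLES
       section):

              duplicity remove-older-than 6M --force
              sftp://uid@other.host/some_dir

              duplicity -t 2002/3/5 sftp://uid@other.host/some_dir /home/me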

FILE SELECTION
       When duplicity is run, it searches through the given source directory
       and backs up all the files specified by the file selection system,
       unless --files-from has been specified, in which case the passed list
       of individual files is used instead.

       The file selection system comprises a number of file selection
       conditions, which are set using one of the following command line
       options:

              --exclude
              --exclude-device-files
              --exclude-if-present
              --exclude-filelist
              --exclude-regexp
              --include
              --include-filelist
              --include-regexp

       For each individual file found in the source directory, the file
       selection conditions are checked in the order they are specified on
       the command line.  Should a selection condition match, the file will
       be included or excluded accordingly and the file selection system will
       proceed to the next file without checking the remaining conditions.

       Earlier arguments therefore take precedence where multiple conditions
       match any given file, and are thus usually given in order of
       decreasing specificity.  If no selection conditions match a given
       file, then the file is implicitly included.

       For example,

              duplicity --include /usr --exclude /usr /usr
              scp://user@host/backup

       is exactly the same as

              duplicity /usr scp://user@host/backup

       because the --include directive matches all files in the backup source
       directory, and takes precedence over the contradicting --exclude
       option as it comes first.

       As a more meaningful example,

              duplicity --include /usr/local/bin --exclude /usr/local /usr
              scp://user@host/backup

       would back up the /usr/local/bin directory (and its contents), but not
       /usr/local/doc.  Note that this is not the same as simply specifying
       /usr/local/bin as the backup source, as other files and folders under
       /usr will also be (implicitly) included.

       The order of the --include and --exclude arguments is important.  In
       the previous example, if the less specific --exclude directive had
       precedence it would prevent the more specific --include from matching
       any files.

       The patterns passed to the --include, --exclude, --include-filelist,
       and --exclude-filelist options are interpreted as extended shell
       globbing patterns by default.  This behaviour can be changed with the
       following filter mode arguments:

              --filter-globbing
              --filter-literal
              --filter-regexp

       These arguments change the interpretation of the patterns used in
       selection conditions, affecting all subsequent file selection options
       passed on the command line.  They may be specified multiple times in
       order to switch pattern interpretations as needed.

       Literal strings differ from globs in that the pattern must match the
       filename exactly.  This can be useful where filenames contain
       characters which have special meaning in shell globs or regular
       expressions.  If passing dynamically generated file lists to duplicity
       using the --include-filelist or --exclude-filelist options, then the
       use of --filter-literal is recommended unless regular expression or
       globbing is specifically required.

       The regular expression language used for selection conditions
       specified with --include-regexp, --exclude-regexp, or when
       --filter-regexp is in effect is as implemented by the Python standard
       library.
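
       For example, to exclude temporary files via a regular expression
       rather than a shell glob (the pattern is illustrative):

              duplicity --filter-regexp --exclude '.*\.tmp$' /home/me
              scp://user@host/backup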

       Extended shell globbing patterns may contain: *, **, ?, and [...]
       (character ranges).  As in a normal shell, * can be expanded to any
       string of characters not containing "/", ? expands to any single
       character except "/", and [...] expands to a single character of those
       characters specified (ranges are acceptable).  The pattern ** expands
       to any string of characters whether or not it contains "/".

       In addition to the above filter mode arguments, the following can be
       used in the same fashion to enable (default) or disable case
       sensitivity in the evaluation of file selection conditions:

              --filter-ignorecase
              --filter-strictcase

       An example of filter mode switching including case insensitivity is

              --filter-ignorecase --include /usr/bin/*.PY --filter-literal
              --include /usr/bin/special?file*name --filter-strictcase
              --exclude /usr/bin

       which would back up *.py, *.pY, *.Py, and *.PY files under /usr/bin
       and also the single literally specified file with globbing characters
       in the name.  The use of --filter-strictcase is not technically
       necessary here, but without it case-insensitive matching may
       (depending on the backup source path) cause unexpected interactions
       between the --include and --exclude options, should the directory
       portion of the path (/usr/bin) contain any uppercase characters.

       If the pattern starts with "ignorecase:" (case insensitive), then this
       prefix will be removed and any character in the string can be replaced
       with an upper- or lowercase version of itself.  This prefix is a
       legacy feature supported for shell globbing selection conditions only,
       but for backward compatibility reasons is otherwise considered part of
       the pattern itself (use --filter-ignorecase instead).

       Remember that you may need to quote patterns when typing them into a
       shell, so the shell does not interpret the globbing patterns or
       whitespace characters before duplicity sees them.

       Selection patterns should generally be thought of as filesystem paths
       rather than arbitrary strings.  For selection conditions using
       extended shell globbing patterns, the --exclude pattern option matches
       a file if:

       1.  pattern can be expanded into the file's filename, or
       2.  the file is inside a directory matched by the option.

       Conversely, the --include pattern option matches a file if:

       1.  pattern can be expanded into the file's filename, or
       2.  the file is inside a directory matched by the option, or
       3.  the file is a directory which contains a file matched by the
           option.

       For example,

              --exclude /usr/local

       matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape.
       It is the same as --exclude /usr/local --exclude '/usr/local/**'.
       On the other hand

              --include /usr/local

       specifies that /usr, /usr/local, /usr/local/lib, and
       /usr/local/lib/netscape (but not /usr/doc) all be backed up.  Thus you
       don't have to worry about including parent directories to make sure
       that included subdirectories have somewhere to go.

       Finally,

              --include ignorecase:'/usr/[a-z0-9]foo/*/**.py'

       would match a file like /usR/5fOO/hello/there/world.py.  If it did
       match anything, it would also match /usr.  If there is no existing
       file that the given pattern can be expanded into, the option will not
       match /usr alone.

       This treatment of patterns in globbing and literal selection
       conditions as filesystem paths reduces the number of explicit
       conditions required.  However, it does require that the paths
       described by all variants of the --include or --exclude options are
       fully specified relative to the backup source directory.
1421
1422 For selection conditions using literal strings, the same logic applies
1423 except that scenario 1 is for an exact match of the pattern.
1424
For selection conditions using regular expressions, the pattern is
evaluated as a regular expression rather than a filesystem path.
Scenario 3 above therefore does not apply; the implications of this
are discussed at the end of this section.
1429
The --include-filelist and --exclude-filelist options also introduce
file selection conditions. They direct duplicity to read in a text
file (either ASCII or UTF-8), each line of which is a file
specification, and to include or exclude the matching files. Lines are
separated by newlines or nulls, depending on whether the --null-
separator switch was given.
1436
1437 Each line in the filelist will be interpreted as a selection pattern in
1438 the same way --include and --exclude options are interpreted, except
1439 that lines starting with "+ " are interpreted as include directives,
1440 even if found in a filelist referenced by --exclude-filelist.
1441 Similarly, lines starting with "- " exclude files even if they are
1442 found within an include filelist.
1443
1444 For example, if file "list.txt" contains the lines:
1445
1446 /usr/local
1447 - /usr/local/doc
1448 /usr/local/bin
1449 + /var
1450 - /var
1451
1452 then --include-filelist list.txt would include /usr, /usr/local, and
1453 /usr/local/bin. It would exclude /usr/local/doc,
1454 /usr/local/doc/python, etc. It would also include /usr/local/man, as
1455 this is included within /usr/local. Finally, it is undefined what
1456 happens with /var. A single file list should not contain conflicting
1457 file specifications.
1458
1459 Each line in the filelist will be interpreted as per the current filter
1460 mode in the same way --include and --exclude options are interpreted.
1461 For instance, if the file "list.txt" contains the lines:
1462
1463 dir/foo
1464 + dir/bar
1465 - **
1466
1467 Then --include-filelist list.txt would be exactly the same as
1468 specifying --include dir/foo --include dir/bar --exclude ** on the
1469 command line.
1470
Note that specifying very large numbers of selection rules as
filelists can incur a substantial performance penalty, as these rules
will (potentially) be checked against every file in the backup source
directory. If you need to back up arbitrary lists of specific files
(i.e. not described by regexp patterns or shell globs) then --files-
from is likely to be more performant.
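
For instance, the following sketch (paths are illustrative) backs up
only the files named in /tmp/filelist.txt, where each line gives a
path relative to the source directory /home/me:

duplicity --files-from /tmp/filelist.txt /home/me
sftp://uid@other.host/some_dir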
1477
1478 Finally, the --include-regexp and --exclude-regexp options allow files
1479 to be included and excluded if their filenames match a regular
1480 expression. Regular expression syntax is too complicated to explain
1481 here, but is covered in Python's library reference. Unlike the
1482 --include and --exclude options, the regular expression options don't
1483 match files containing or contained in matched files. So for instance
1484
1485 --include-regexp '[0-9]{7}(?!foo)'
1486
1487 matches any files whose full pathnames contain 7 consecutive digits
1488 which aren't followed by 'foo'. However, it wouldn't match /home even
1489 if /home/ben/1234567 existed.
1490
A NOTE ON AMAZON DRIVE BACKEND
1. The API Keys used for Amazon Drive have not been granted
production limits. Amazon do not say what the development
limits are and are not replying to requests to whitelist
duplicity. A related tool, acd_cli, was demoted to development
limits, but continues to work fine except for cases of excessive
usage. If you experience throttling and similar issues with
Amazon Drive using this backend, please report them to the
mailing list.
2. If you previously used the acd+acdcli backend, it is strongly
recommended to update to the ad backend instead, since it
interfaces directly with Amazon Drive. You will need to set up
the OAuth once again, but can otherwise keep your backups and
config.
1505
A NOTE ON AMAZON S3
When backing up to Amazon S3, two backend implementations are
available: the older backend based on the boto library, which is
deprecated and no longer maintained, and the newer backend based on
the boto3 library. The boto3 backend fixes several known limitations
of the older backend, which accumulated as Amazon S3 evolved.

The boto3 backend should behave largely the same as the older backend,
but there are some differences in the supported "--s3-..." options, as
well as some compatibility differences. See the documentation of each
option above regarding differences related to each backend.
1518
The boto3 backend does not support bucket creation. This deliberate
choice simplifies the code and sidesteps problems related to region
selection. Additionally, it is probably not good practice to give
your backup role bucket creation rights. In most cases the role used
for backups should be limited to specific buckets.
1524
The boto3 backend only supports the newer domain-style buckets. Amazon
is moving to deprecate the older bucket style, so migration is
recommended. Use the boto backend for compatibility with buckets using
older naming conventions.
1529
1530 The boto3 backend does not currently support initiating restores from
1531 the glacier storage class. When restoring a backup from glacier or
1532 glacier deep archive, the backup files must first be restored out of
1533 band. There are multiple options when restoring backups from cold
1534 storage, which vary in both cost and speed. See Amazon's documentation
1535 for details.
1536
Both backends use environment variables for authentication:
AWS_ACCESS_KEY_ID (required),
AWS_SECRET_ACCESS_KEY (required)
or
BOTO_CONFIG (required) pointing to a boto config file.
For simplicity's sake we will document the use of the AWS_* vars only.
Research the boto documentation available on the web if you want to
use the config file.
1545
1546 boto3 backend example backup command line:
1547
1548 AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
1549 duplicity /some/path s3:///bucket/subfolder
1550
You may add --s3-endpoint-url (to access non-Amazon S3 services or
regional endpoints) and may need --s3-region-name (for buckets created
in specific regions) and other --s3-... options documented above.
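
For instance, a backup to a bucket created in a specific region might
look like the following sketch (endpoint and region values are
illustrative):

AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
duplicity /some/path s3:///bucket/subfolder
--s3-region-name=eu-central-1
--s3-endpoint-url=https://s3.eu-central-1.amazonaws.com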
1554
1555 legacy boto backend example backup command line:
1556
1557 AWS_ACCESS_KEY_ID=<key_id> AWS_SECRET_ACCESS_KEY=<access_key>
1558 duplicity /some/path boto+s3://[host:port]/bucket/subfolder
1559
The url host setting is optional and allows you to define a custom
endpoint host. You may add --s3-european-buckets and other --s3-...
options documented above if needed.
1563
1564
A NOTE ON AZURE ACCESS
The Azure backend requires the Microsoft Azure Storage Blobs client
library for Python to be installed on the system. See REQUIREMENTS.
1568
1569 It uses the environment variable AZURE_CONNECTION_STRING (required).
1570 This string contains all necessary information such as Storage Account
1571 name and the key for authentication. You can find it under Access Keys
1572 for the storage account.
1573
Duplicity will take care of creating the container when performing the
backup; do not create it manually beforehand.
1576
A container name (as given in the backup url) must be a valid DNS
name, conforming to the following naming rules:
1579
1580 1. Container names must start with a letter or number, and
1581 can contain only letters, numbers, and the dash (-)
1582 character.
1583 2. Every dash (-) character must be immediately preceded and
1584 followed by a letter or number; consecutive dashes are
1585 not permitted in container names.
1586 3. All letters in a container name must be lowercase.
1587 4. Container names must be from 3 through 63 characters
1588 long.
1589
1590 These rules come from Azure; see https://docs.microsoft.com/en-
1591 us/rest/api/storageservices/naming-and-referencing-
1592 containers--blobs--and-metadata
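
An illustrative backup invocation (the connection string value is
elided; the container name follows the rules above):

AZURE_CONNECTION_STRING='<connection_string>' duplicity /some/path
azure://duplicity-backups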
1593
A NOTE ON BOX ACCESS
The box backend requires boxsdk with jwt support to be installed on the
system. See REQUIREMENTS.

It uses the environment variable BOX_CONFIG_PATH (optional). This
string contains the path to the box custom app's config.json. Either
this environment variable or the config query parameter in the url
needs to be specified; if both are specified, the query parameter
takes precedence.
1602
Create a Box custom app
In order to use the box backend, the user needs to create a box custom
app in the box developer console
(https://app.box.com/developers/console).

After creating a new custom app, please make sure it is configured as
follows:
1609
1610 1. Choose "App Access Only" for "App Access Level"
1611 2. Check "Write all files and folders stored in Box"
1612 3. Generate a Public/Private Keypair
1613
The user also needs to grant the created custom app permission in the
admin console (https://app.box.com/master/custom-apps) by clicking the
"+" button and entering the client_id, which can be found on the
custom app's configuration page.
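
Once the app is created and authorized, a backup invocation might look
like this sketch (paths and folder names are illustrative; see URL
FORMAT above for the box:// scheme):

BOX_CONFIG_PATH=/path/to/box_config.json duplicity /some/path
box:///backup_folder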
1618
A NOTE ON CLOUD FILES ACCESS
Pyrax is Rackspace's next-generation Cloud management API, including
Cloud Files access. The cfpyrax backend requires the pyrax library to
be installed on the system. See REQUIREMENTS.

Cloudfiles is Rackspace's now deprecated implementation of the
OpenStack Object Storage protocol. Users wishing to use Duplicity with
Rackspace Cloud Files should migrate to the new Pyrax plugin to ensure
support.
1627
1628 The backend requires python-cloudfiles to be installed on the system.
1629 See REQUIREMENTS.
1630
It uses three environment variables for authentication:
CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required),
CLOUDFILES_AUTHURL (optional)

If CLOUDFILES_AUTHURL is unspecified it will default to the value
provided by python-cloudfiles, which points to Rackspace; hence this
value must be set in order to use other Cloud Files providers.
1638
A NOTE ON DROPBOX ACCESS
1. First of all, the Dropbox backend requires a valid authentication
token. It should be passed via the DPBX_ACCESS_TOKEN environment
variable.
To obtain it, please create a 'Dropbox API' application at:
https://www.dropbox.com/developers/apps/create
Then visit app settings and just use the 'Generated access token'
under the OAuth2 section.
Alternatively, you can let duplicity generate the access token
itself. In that case, temporarily export DPBX_APP_KEY and
DPBX_APP_SECRET using values from the app settings page and run
duplicity interactively.
It will print the URL that you need to open in the browser to
obtain an OAuth2 token for the application. Just follow the
on-screen instructions and then put the generated token into the
DPBX_ACCESS_TOKEN variable. Once done, feel free to unset
DPBX_APP_KEY and DPBX_APP_SECRET
1656
1657 2. "some_dir" must already exist in the Dropbox folder. Depending
1658 on access token kind it may be:
1659 Full Dropbox: path is absolute and starts from 'Dropbox'
1660 root folder.
1661 App Folder: path is related to application folder.
1662 Dropbox client will show it in ~/Dropbox/Apps/<app-name>
1663
3. When using Dropbox for storage, be aware that all files,
including the ones in the Apps folder, will be synced to all
connected computers. You may prefer to use a separate Dropbox
account dedicated to the backups, and not connect any computers
to that account. Alternatively, you can configure selective sync
on all computers to avoid syncing of backup files.
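
Putting this together, a backup using an access token might look like
the following sketch ("some_dir" must already exist, as described in
point 2):

DPBX_ACCESS_TOKEN=<access_token> duplicity /some/path
dpbx:///some_dir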
1670
A NOTE ON EUROPEAN S3 BUCKETS
Amazon S3 provides the ability to choose the location of a bucket upon
its creation. The purpose is to enable the user to choose a location
which is network-topologically closer to the user, because this may
allow for faster data transfers.
duplicity will create a new bucket the first time a bucket access is
attempted. At this point, the bucket will be created in Europe if
--s3-european-buckets was given. For reasons having to do with how the
Amazon S3 service works, this also requires the use of the --s3-use-
new-style option. This option turns on subdomain-based bucket
addressing in S3. The details are beyond the scope of this man page,
but it is important to know that your bucket must not contain
uppercase letters or any other characters that are not valid parts of
a hostname. Consequently, for reasons of backwards compatibility, use
of subdomain-based bucket addressing is not enabled by default.
Note that you will need to use --s3-use-new-style for all operations
on European buckets; not just upon initial creation.
You only need to use --s3-european-buckets upon initial creation, but
you may use it at all times for consistency.
1690 Further note that when creating a new European bucket, it can take a
1691 while before the bucket is fully accessible. At the time of this
1692 writing it is unclear to what extent this is an expected feature of
1693 Amazon S3, but in practice you may experience timeouts, socket errors
1694 or HTTP errors when trying to upload files to your newly created
1695 bucket. Give it a few minutes and the bucket should function normally.
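
For example, creating and using a European bucket with the legacy boto
backend might look like this sketch (the bucket name is illustrative
and must be valid as part of a hostname):

duplicity --s3-use-new-style --s3-european-buckets /some/path
boto+s3://my-eu-bucket/subfolder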
1696
A NOTE ON FILENAME PREFIXES
Filename prefixes can be used in multi backend with mirror mode to
1699 define affinity rules. They can also be used in conjunction with S3
1700 lifecycle rules to transition archive files to Glacier, while keeping
1701 metadata (signature and manifest files) on S3.
1702
1703 Duplicity does not require access to archive files except when
1704 restoring from backup.
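
As an illustration, the following sketch uploads archive volumes with
one prefix and metadata with another, so that an S3 lifecycle rule
matching archive_ can transition only the volumes to Glacier (prefix
values are illustrative):

duplicity --file-prefix-archive archive_ --file-prefix-manifest meta_
--file-prefix-signature meta_ /some/path s3:///bucket/subfolder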
1705
A NOTE ON GOOGLE CLOUD STORAGE (GCS via S3)
Overview
Duplicity access to GCS currently relies on its Interoperability API
(basically S3 for GCS). This needs to be actively enabled before
access is possible. For details read the next section Preparations
below.
Two backends are available to access S3: boto3, which is used via
s3:// (an alias for boto3+s3://), and the legacy boto backend, usable
via boto+s3://.
1715
1716 Preparations
1717 1. login on https://console.cloud.google.com/
1718 2. go to Cloud Storage->Settings->Interoperability
1719 3. create a Service account (if needed)
1720 4. create Service account HMAC access key and secret (!!instantly
1721 copy!! the secret, it can NOT be recovered later)
1722 5. go to Cloud Storage->Browser
1723 6. create a bucket
1724 7. add permissions for Service account that was used to set up
1725 Interoperability access above
1726
1727 Once set up you can use the generated Interoperable Storage Access key
1728 and secret and pass them to duplicity as described in the next section.
1729
Usage
The following examples show accessing GCS via S3 for a collection-
status action. The env vars, options and url format shown can of
course be applied to all other actions as well.
1734
1735 using boto3 supplying the --s3-endpoint-url manually.
1736
1737 AWS_ACCESS_KEY_ID=<keyid> AWS_SECRET_ACCESS_KEY=<secret>
1738 duplicity collection-status s3://<bucket>/<folder>
1739 --s3-endpoint-url=https://storage.googleapis.com
1740
or alternatively with legacy boto, using either boto+gs://.
1742
1743 GS_ACCESS_KEY_ID=<keyid> GS_SECRET_ACCESS_KEY=<secret> duplicity
1744 collection-status boto+gs://<bucket>/<folder>
1745
1746 NOTE: The auth env vars are prefixed GS_ in this case!
1747
or boto+s3://, supplying the --s3-endpoint-url manually.

AWS_ACCESS_KEY_ID=<keyid> AWS_SECRET_ACCESS_KEY=<secret>
duplicity collection-status boto+s3://<bucket>/<folder>
--s3-endpoint-url=https://storage.googleapis.com
1753
1754 Alternatively, you can run gsutil config -a to have the Google Cloud
1755 Storage utility populate the ~/.boto configuration file.
1756
NOTE: Also see the section URL FORMAT for a brief overview of the
expected url format.
1759
A NOTE ON GDRIVE BACKEND
The GDrive backend is a rewritten PyDrive backend with fewer
dependencies and a simpler setup - it uses the JSON keys downloaded
directly from the Google Cloud Console.

Note that Google has two drive methods, `Shared Drives` (previously
Team Drives) and `My Drive`; both can be shared but require different
addressing
1767
For a Google Shared Drives folder, the Shared Drive ID is specified as
a query parameter, driveID, in the backend URL. Example:
gdrive://developer.gserviceaccount.com/target-
folder/?driveID=<SHARED DRIVE ID>
1774
For a Google My Drive based shared folder, the My Drive folder ID is
specified as a query parameter, myDriveFolderID, in the backend URL.
Example:
export GOOGLE_SERVICE_ACCOUNT_URL=<serviceaccount-
name>@<serviceaccount-name>.iam.gserviceaccount.com
gdrive://${GOOGLE_SERVICE_ACCOUNT_URL}/<target-folder-name-in-
myDriveFolder>?myDriveFolderID=root
1783
1784
There are also two ways to authenticate to use GDrive: with a regular
account or with a "service account". With a service account, a
separate account is created that is only accessible via Google APIs
and not a web login. With a regular account, you can store backups in
your normal Google Drive.
1790
1791 To use a service account, go to the Google developers console at
1792 https://console.developers.google.com. Create a project, and make sure
1793 Drive API is enabled for the project. In the "Credentials" section,
1794 click "Create credentials", then select Service Account with JSON key.
1795
1796 The GOOGLE_SERVICE_JSON_FILE environment variable needs to contain the
1797 path to the JSON file on duplicity invocation.
1798
1799 export GOOGLE_SERVICE_JSON_FILE=<path-to-serviceaccount-
1800 credentials.json>
1801
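A complete backup invocation using a service account might then look
like the following sketch (the account address and folder are
illustrative and must match the URL forms shown above):

GOOGLE_SERVICE_JSON_FILE=<path-to-serviceaccount-credentials.json>
duplicity /some/path
gdrive://${GOOGLE_SERVICE_ACCOUNT_URL}/target-
folder/?driveID=<SHARED DRIVE ID>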
1802
The alternative is to use a regular account. To do this, start as
above, but when creating a new Client ID, select "Create OAuth client
ID", with application type "Desktop app". Download the
client_secret.json file for the new client, set the
GOOGLE_CLIENT_SECRET_JSON_FILE environment variable to the path to
this file, and set GOOGLE_CREDENTIALS_FILE to a path to a file where
duplicity will keep the authentication token - this location must be
writable.
1810
1811 NOTE: As a sanity check, GDrive checks the host and username from the
1812 URL against the JSON key, and refuses to proceed if the addresses do
1813 not match. Either the email (for the service accounts) or Client ID
1814 (for regular OAuth accounts) must be present in the URL. See URL FORMAT
1815 above.
1816
First run / OAuth 2.0 authorization
During the first run, you will be prompted to visit a URL in your
browser to grant access to your Google Drive. A temporary HTTP service
will be started on a local network interface for this purpose (by
default on http://localhost:8080/). The IP address/host and port can
be adjusted if need be via the environment variables
GOOGLE_OAUTH_LOCAL_SERVER_HOST and GOOGLE_OAUTH_LOCAL_SERVER_PORT
respectively.
1825
If you are running duplicity in a remote location, you will need to
make sure that you can access the above HTTP service with a browser,
utilizing e.g. port forwarding or a temporary firewall permission.
1830
1831 The access credentials will be saved in the JSON file mentioned above
1832 for future use after a successful authorization.
1833
A NOTE ON HUBIC
The hubic backend requires the pyrax library to be installed on the
system. See REQUIREMENTS. You will need to set your credentials for
hubiC in a file called ~/.hubic_credentials, following this pattern:
1838 [hubic]
1839 email = your_email
1840 password = your_password
1841 client_id = api_client_id
1842 client_secret = api_secret_key
1843 redirect_uri = http://localhost/
1844
A NOTE ON IMAP
An IMAP account can be used as a target for the upload. The userid may
be specified and the password will be requested.
The from_address_prefix may be specified (and probably should be). The
text will be used as the "From" address in the IMAP server. Then, on a
restore (or list) command, the from_address_prefix will distinguish
between different backups.
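
An illustrative invocation (the scheme and prefix follow URL FORMAT
above; the address values are placeholders):

duplicity /some/path imaps://user@mail.example.com/backup_prefix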
1852
A NOTE ON MEDIAFIRE BACKEND
This backend requires the mediafire Python library to be installed on
the system. See REQUIREMENTS.
1856
1857 Use URL escaping for username (and password, if provided via command
1858 line):
1859
1860 mf://duplicity%40example.com@mediafire.com/some_folder
1861 The destination folder will be created for you if it does not exist.
1862
A NOTE ON MULTI BACKEND
The multi backend allows duplicity to combine the storage available in
more than one backend store (e.g., you can store across a google drive
account and a onedrive account to get effectively the combined storage
available in both). The URL path specifies a JSON formatted config
file containing a list of the backends it will use. The URL may also
specify "query" parameters to configure overall behavior. Each element
of the list must have a "url" element, and may also contain an
optional "description" and an optional "env" list of environment
variables used to configure that backend.
1873 Query Parameters
Query parameters come after the file URL in standard HTTP format, for
example:
1876 multi:///path/to/config.json?mode=mirror&onfail=abort
1877 multi:///path/to/config.json?mode=stripe&onfail=continue
1878 multi:///path/to/config.json?onfail=abort&mode=stripe
1879 multi:///path/to/config.json?onfail=abort
Order does not matter; however, unrecognized parameters are considered
an error.
1882 mode=stripe
1883 This mode (the default) performs round-robin access to the list
1884 of backends. In this mode, all backends must be reliable as a
1885 loss of one means a loss of one of the archive files.
1886 mode=mirror
1887 This mode accesses backends as a RAID1-store, storing every file
1888 in every backend and reading files from the first-successful
1889 backend. A loss of any backend should result in no failure.
1890 Note that backends added later will only get new files and may
1891 require a manual sync with one of the other operating ones.
1892 onfail=continue
This setting (the default) continues all write operations on a
best-effort basis. Any failure results in the next backend being
tried. Failure is reported only when all backends fail a given
operation, with the error result from the last failure.
1897 onfail=abort
1898 This setting considers any backend write failure as a
1899 terminating condition and reports the error. Data reading and
1900 listing operations are independent of this and will try with the
1901 next backend on failure.
1902 JSON File Example
1903 [
1904 {
1905 "description": "a comment about the backend"
1906 "url": "abackend://myuser@domain.com/backup",
1907 "env": [
1908 {
1909 "name" : "MYENV",
1910 "value" : "xyz"
1911 },
1912 {
1913 "name" : "FOO",
1914 "value" : "bar"
1915 }
1916 ],
1917 "prefixes": ["prefix1_", "prefix2_"]
1918 },
1919 {
1920 "url": "file:///path/to/dir"
1921 }
1922 ]
1923
A NOTE ON PAR2 WRAPPER BACKEND
Par2 Wrapper Backend can be used in combination with all other backends
to create recovery files. Just add par2+ before a regular scheme (e.g.
par2+ftp://user@host/dir or par2+s3+http://bucket_name). This will
create par2 recovery files for each archive and upload them all to the
wrapped backend.
Before restoring, archives will be verified. Corrupt archives will be
repaired on the fly if there are enough recovery blocks available.
Use --par2-redundancy percent to adjust the size (and redundancy) of
recovery files, in percent.
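
For example, wrapping the sftp backend with 10 percent redundancy
might look like:

duplicity --par2-redundancy 10 /some/path
par2+sftp://uid@other.host/some_dir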
1934
A NOTE ON PCA ACCESS
PCA is a long-term data archival solution by OVH. It runs a slightly
modified version of OpenStack Swift introducing latency in the data
retrieval process. It is a good pick for a multi backend configuration
used to receive volumes, while another backend stores manifests and
signatures.

The backend requires python-swiftclient to be installed on the system.
python-keystoneclient is also needed to interact with OpenStack's
Keystone Identity service. See REQUIREMENTS.
1945
It uses the following environment variables for authentication:
PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL
(required), PCA_USERID (optional), PCA_TENANTID (optional, but either
the tenant name or tenant id must be supplied), PCA_REGIONNAME
(optional), PCA_TENANTNAME (optional, but either the tenant name or
tenant id must be supplied)
1952
1953 If the user was previously authenticated, the following environment
1954 variables can be used instead: PCA_PREAUTHURL (required),
1955 PCA_PREAUTHTOKEN (required)
1956
1957 If PCA_AUTHVERSION is unspecified, it will default to version 2.
1958
A NOTE ON PYDRIVE BACKEND
The pydrive backend requires the Python PyDrive package to be
installed on the system. See REQUIREMENTS.

There are two ways to use PyDrive: with a regular account or with a
"service account". With a service account, a separate account is
created that is only accessible via Google APIs and not a web login.
With a regular account, you can store backups in your normal Google
Drive.
1968
1969 To use a service account, go to the Google developers console at
1970 https://console.developers.google.com. Create a project, and make sure
1971 Drive API is enabled for the project. Under "APIs and auth", click
1972 Create New Client ID, then select Service Account with P12 key.
1973
1974 Download the .p12 key file of the account and convert it to the .pem
1975 format:
1976 openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem
1977
The content of the .pem file should be passed in the
GOOGLE_DRIVE_ACCOUNT_KEY environment variable for authentication.
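
A minimal sketch of supplying the key at invocation time (the target
URL is illustrative; see URL FORMAT above):

GOOGLE_DRIVE_ACCOUNT_KEY="$(cat pydriveprivatekey.pem)"
export GOOGLE_DRIVE_ACCOUNT_KEY
duplicity /some/path pydrive://<account email address>/some_dir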
1980
The email address of the account will be used as part of the URL. See
URL FORMAT above.
1983
1984 The alternative is to use a regular account. To do this, start as
1985 above, but when creating a new Client ID, select "Installed
1986 application" of type "Other". Create a file with the following content,
1987 and pass its filename in the GOOGLE_DRIVE_SETTINGS environment
1988 variable:
1989 client_config_backend: settings
1990 client_config:
1991 client_id: <Client ID from developers' console>
1992 client_secret: <Client secret from developers' console>
1993 save_credentials: True
1994 save_credentials_backend: file
1995 save_credentials_file: <filename to cache credentials>
1996 get_refresh_token: True
1997
In this scenario, the username and host parts of the URL play no role;
only the path matters. During the first run, you will be prompted to
visit a URL in your browser to grant access to your drive. Once
granted, you will receive a verification code to paste back into
duplicity. The credentials are then cached in the file referenced
above for future use.
2004
A NOTE ON RCLONE BACKEND
Rclone is a powerful command line program to sync files and directories
to and from various cloud storage providers.
2008
2009 Usage
2010 Once you have configured an rclone remote via
2011
2012 rclone config
2013
2014 and successfully set up a remote (e.g. gdrive for Google Drive),
2015 assuming you can list your remote files with
2016
2017 rclone ls gdrive:mydocuments
2018
2019 you can start your backup with
2020
2021 duplicity /mydocuments rclone://gdrive:/mydocuments
2022
Please note the slash after the second colon. Some storage providers
will work with or without a slash after the colon, but others will
not. Since duplicity will complain about a malformed URL if a slash is
not present, always put it after the colon, and the backend will
handle it for you.
2028
2029 Options
2030 Note that all rclone options can be set by env vars as well. This is
2031 properly documented here
2032
2033 https://rclone.org/docs/
2034
but in a nutshell you need to take the long option name, strip the
leading --, change - to _, make it upper case and prepend RCLONE_. For
example,
2038
2039 the equivalent of '--stats 5s' would be the env var
2040 RCLONE_STATS=5s
2041
A NOTE ON SLATE BACKEND
Three environment variables are used with the slate backend:
1. `SLATE_API_KEY` - Your slate API key
2. `SLATE_SSL_VERIFY` - either '1' (True) or '0' (False) for ssl
verification (optional - True by default)
3. `PASSPHRASE` - your gpg passphrase for encryption (optional -
will be prompted if not set, or not used at all if using the `--no-
encryption` parameter)
2050
2051 To use the slate backend, use the following scheme:
2052 slate://[slate-id]
2053
2054 e.g. Full backup of current directory to slate:
2055 duplicity full . "slate://6920df43-5c3w-2x7i-69aw-2390567uav75"
2056
2057 Here's a demo:
2058 https://gitlab.com/Shr1ftyy/duplicity/uploads/675664ef0eb431d14c8e20045e3fafb6/slate_demo.mp4
2059
A NOTE ON SSH BACKENDS
The ssh backends support sftp and scp/ssh transport protocols. This is
a known user-confusing issue as these are fundamentally different. If
you plan to access your backend via one of those, please inform
yourself about the requirements for a server to support sftp or
scp/ssh access. To make it even more confusing, the user can choose
between several ssh backends via a scheme prefix: paramiko+ (default),
pexpect+, lftp+... . paramiko & pexpect support --use-scp,
--ssh-askpass and --ssh-options. Only the pexpect backend allows you
to define --scp-command and --sftp-command.
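
For example, the same sftp target can be addressed via each backend
(an illustrative sketch):

duplicity /home/me sftp://uid@other.host/some_dir
duplicity /home/me pexpect+sftp://uid@other.host/some_dir
duplicity /home/me lftp+sftp://uid@other.host/some_dir
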
The SSH paramiko backend (default) is a complete reimplementation of
the ssh protocols natively in python. Advantages are speed and
maintainability. A minor disadvantage is that extra packages are
needed, as listed in REQUIREMENTS. In sftp (default) mode all
operations are done via the corresponding sftp commands. In scp mode
(--use-scp) scp is used for put/get operations, but listing is done
via an ssh remote shell.
The SSH pexpect backend is the legacy ssh backend using the command
line ssh binaries via pexpect. Older versions used scp for get and put
operations and sftp for list and delete operations. The current
version uses sftp for all four supported operations, unless the
--use-scp option is used to revert to the old behavior.
The SSH lftp backend is simply there because lftp can interact with
the ssh command line binaries. It is meant as a last resort in case
the above options fail for some reason.
2084
2085 Why use sftp instead of scp?
The change to sftp was made in order to allow the remote system to
chroot the backup, thus providing better security, and because sftp
does not suffer from the shell quoting issues of scp. Scp also does
not support any kind of file listing, so sftp or ssh access will
always be needed in addition for this backend mode to work properly.
Sftp does not have these limitations but needs an sftp service running
on the backend server, which is sometimes not an option.
2093
A NOTE ON SSL CERTIFICATE VERIFICATION
Certificate verification, as implemented right now [02.2016], is only
available in the webdav and lftp backends. Older Python versions
(2.7.8 and earlier) and older lftp binaries need a file-based database
of certification authority certificates (cacert file).
Newer Python versions (2.7.9+) and recent lftp versions, however,
support the system default certificates (usually in /etc/ssl/certs)
and also allow giving an alternative CA certificate folder via
--ssl-cacert-path.
2102 The cacert file has to be a PEM formatted text file as currently
2103 provided by the CURL project. See
2104 http://curl.haxx.se/docs/caextract.html
2105 After creating/retrieving a valid cacert file you should copy it to
2106 either
2107 ~/.duplicity/cacert.pem
2108 ~/duplicity_cacert.pem
2109 /etc/duplicity/cacert.pem
Duplicity searches these locations in that order and will fail if it
can't find the file. You can however specify the option
--ssl-cacert-file <file> to point duplicity to a copy in a different
location.
Finally there is the --ssl-no-check-certificate option to disable
certificate verification altogether, in case some ssl library is
missing or verification is not wanted. Use it with care, as even with
self-signed servers, manually providing the private CA certificate is
definitely the safer option.
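
For instance, pointing duplicity at a custom cacert file for a webdav
backend might look like this sketch (paths are placeholders):

duplicity --ssl-cacert-file /path/to/cacert.pem /some/path
webdavs://user@other.host/some_dir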
2118
A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
Swift is the OpenStack Object Storage service.
The backend requires python-swiftclient to be installed on the system.
python-keystoneclient is also needed to use OpenStack's Keystone
Identity service. See REQUIREMENTS.
2124
It uses the following environment variables for authentication:
2126
2127 SWIFT_USERNAME (required),
2128 SWIFT_PASSWORD (required),
2129 SWIFT_AUTHURL (required),
2130 SWIFT_TENANTID or SWIFT_TENANTNAME (required with
2131 SWIFT_AUTHVERSION=2, can alternatively be defined in
2132 SWIFT_USERNAME like e.g. SWIFT_USERNAME="tenantname:user"),
2133 SWIFT_PROJECT_ID or SWIFT_PROJECT_NAME (required with
2134 SWIFT_AUTHVERSION=3),
2135 SWIFT_USERID (optional, required only for IBM Bluemix
2136 ObjectStorage),
2137 SWIFT_REGIONNAME (optional).
2138
2139 If the user was previously authenticated, the following environment
2140 variables can be used instead: SWIFT_PREAUTHURL (required),
2141 SWIFT_PREAUTHTOKEN (required)
2142
2143 If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
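
An illustrative invocation with auth version 1 (values are
placeholders):

SWIFT_USERNAME=<user> SWIFT_PASSWORD=<password>
SWIFT_AUTHURL=<auth url> duplicity /some/path swift://container_name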
2144
A NOTE ON SYMMETRIC ENCRYPTION AND SIGNING
Signing and symmetrically encrypting at the same time with the gpg
binary on the command line, as used within duplicity, is a
particularly challenging issue. Tests showed that the following
combinations work:
1. Set up gpg-agent properly. Use the option --use-agent and enter
both passphrases (symmetric and sign key) in gpg-agent's dialog.
2. Use a PASSPHRASE for symmetric encryption of your choice, while the
signing key has an empty passphrase.
3. The PASSPHRASE used for symmetric encryption and the passphrase of
the signing key are identical.
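
For example, combination 3 might look like the following sketch, where
the same passphrase protects the signing key and is used for the
symmetric encryption:

PASSPHRASE=<passphrase> duplicity full --sign-key <fingerprint>
/some/path sftp://uid@other.host/some_dir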
2156
KNOWN ISSUES / BUGS
Hard links are currently unsupported (they will be treated as
non-linked regular files).

Bad signatures will be treated as empty instead of logging an
appropriate error message.
2163
OPERATION AND DATA FORMATS
This section describes duplicity's basic operation and the format of
its data files. It should not be necessary to read this section to
use duplicity.
2168
2169 The files used by duplicity to store backup data are tarfiles in GNU
2170 tar format. They can be produced independently by rdiffdir(1). For
2171 incremental backups, new files are saved normally in the tarfile. But
2172 when a file changes, instead of storing a complete copy of the file,
2173 only a diff is stored, as generated by rdiff(1). If a file is deleted,
2174 a 0 length file is stored in the tar. It is possible to restore a
2175 duplicity archive "manually" by using tar and then cp, rdiff, and rm as
2176 necessary. These duplicity archives have the extension difftar.
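
A rough manual-restore sketch (all file names are placeholders, and
the exact member layout inside the tarfiles may vary):

gpg -d duplicity-full.<time>.vol1.difftar.gpg > full1.difftar
tar -xf full1.difftar (unpacks e.g. snapshot/... file entries)
gpg -d duplicity-inc.<t1>.to.<t2>.vol1.difftar.gpg > inc1.difftar
tar -xf inc1.difftar (unpacks e.g. diff/... rdiff delta entries)
rdiff patch snapshot/some/file diff/some/file restored_file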
2177
2178 Both full and incremental backup sets have the same format. In effect,
2179 a full backup set is an incremental one generated from an empty
2180 signature (see below). The files in full backup sets will start with
2181 duplicity-full while the incremental sets start with duplicity-inc.
2182 When restoring, duplicity applies patches in order, so deleting, for
2183 instance, a full backup set may make related incremental backup sets
2184 unusable.
2185
2186 In order to determine which files have been deleted, and to calculate
2187 diffs for changed files, duplicity needs to process information about
2188 previous sessions. It stores this information in the form of tarfiles
2189 where each entry's data contains the signature (as produced by rdiff)
2190 of the file instead of the file's contents. These signature sets have
2191 the extension sigtar.
2192
2193 Signature files are not required to restore a backup set, but without
2194 an up-to-date signature, duplicity cannot append an incremental backup
2195 to an existing archive.
2196
2197 To save bandwidth, duplicity generates full signature sets and
2198 incremental signature sets. A full signature set is generated for each
2199 full backup, and an incremental one for each incremental backup. These
2200 start with duplicity-full-signatures and duplicity-new-signatures
respectively. These signatures will be stored both locally and
remotely. The remote signatures will be encrypted if encryption is
enabled. The local signatures will not be encrypted; they are stored
in the archive dir (see --archive-dir ).
2205
REQUIREMENTS
Duplicity requires a POSIX-like operating system with a python
interpreter version 2.6+ installed. It is best used under GNU/Linux.
2209
2210 Some backends also require additional components (probably available as
2211 packages for your specific platform):
2212 Amazon Drive backend
2213 python-requests - http://python-requests.org
2214 python-requests-oauthlib - https://github.com/requests/requests-
2215 oauthlib
2216 azure backend (Azure Storage Blob Service)
2217 Microsoft Azure Storage Blobs client library for Python -
2218 https://pypi.org/project/azure-storage-blob/
2219 boto backend (S3 Amazon Web Services, Google Cloud Storage) (legacy)
2220 boto version 2.49 (2018/07/11) - http://github.com/boto/boto
2221 boto3 backend (S3 Amazon Web Services, Google Cloud Storage) (default)
2222 boto3 version 1.x - https://github.com/boto/boto3
2223 box backend (box.com)
2224 boxsdk - https://github.com/box/box-python-sdk
2225 cfpyrax backend (Rackspace Cloud) and hubic backend (hubic.com)
2226 Rackspace CloudFiles Pyrax API -
2227 http://docs.rackspace.com/sdks/guide/content/python.html
2228 dpbx backend (Dropbox)
2229 Dropbox Python SDK -
2230 https://www.dropbox.com/developers/reference/sdk
2231 gdocs gdata backend (legacy)
2232 Google Data APIs Python Client Library -
2233 http://code.google.com/p/gdata-python-client/
gdocs pydrive backend (default)
2235 see pydrive backend
2236 gio backend (Gnome VFS API)
2237 PyGObject - http://live.gnome.org/PyGObject
D-Bus (dbus) - http://www.freedesktop.org/wiki/Software/dbus
2239 lftp backend (needed for ftp, ftps, fish [over ssh] - also supports
2240 sftp, webdav[s])
2241 LFTP Client - http://lftp.yar.ru/
2242 MEGA backend (only works for accounts created prior to November 2018)
2243 (mega.nz)
2244 megatools client - https://github.com/megous/megatools
2245 MEGA v2 and v3 backend (works for all MEGA accounts) (mega.nz)
2246 MEGAcmd client - https://mega.nz/cmd
2247 multi backend
2248 Multi -- store to more than one backend
2249 (also see A NOTE ON MULTI BACKEND ) below.
2250 ncftp backend (ftp, select via ncftp+ftp://)
2251 NcFTP - http://www.ncftp.com/
2252 OneDrive backend (Microsoft OneDrive)
2253 python-requests-oauthlib - https://github.com/requests/requests-
2254 oauthlib
2255 Par2 Wrapper Backend
2256 par2cmdline - http://parchive.sourceforge.net/
2257 pydrive backend
2258 PyDrive -- a wrapper library of google-api-python-client -
2259 https://pypi.python.org/pypi/PyDrive
2260 (also see A NOTE ON PYDRIVE BACKEND ) below.
2261 rclone backend
2262 rclone - https://rclone.org/
2263 rsync backend
2264 rsync client binary - http://rsync.samba.org/
2265 ssh paramiko backend (default)
2266 paramiko (SSH2 for python) -
2267 http://pypi.python.org/pypi/paramiko (downloads);
2268 http://github.com/paramiko/paramiko (project page)
2269 pycrypto (Python Cryptography Toolkit) -
2270 http://www.dlitz.net/software/pycrypto/
ssh pexpect backend (legacy)
2272 sftp/scp client binaries OpenSSH - http://www.openssh.com/
2273 Python pexpect module -
2274 http://pexpect.sourceforge.net/pexpect.html
2275 swift backend (OpenStack Object Storage)
2276 Python swiftclient module - https://github.com/openstack/python-
2277 swiftclient/
2278 Python keystoneclient module -
2279 https://github.com/openstack/python-keystoneclient/
2280 webdav backend
2281 certificate authority database file for ssl certificate
2282 verification of HTTPS connections -
2283 http://curl.haxx.se/docs/caextract.html
2284 (also see A NOTE ON SSL CERTIFICATE VERIFICATION).
2285 Python kerberos module for kerberos authentication -
2286 https://github.com/02strich/pykerberos
2287 MediaFire backend
2288 MediaFire Python Open SDK -
2289 https://pypi.python.org/pypi/mediafire/
2290
AUTHOR
Original Author - Ben Escoto <bescoto@stanford.edu>
2293 Current Maintainer - Kenneth Loafman <kenneth@loafman.com>
2294 Continuous Contributors
2295 Edgar Soldin, Mike Terry
Most backends were contributed individually. Information about their
authorship may be found in the corresponding file's header.
Also, we'd like to thank everybody posting issues to the mailing list
or on launchpad, sending in patches or contributing otherwise.
Duplicity wouldn't be as stable and useful if it weren't for you.
2301 A special thanks goes to rsync.net, a Cloud Storage provider with
2302 explicit support for duplicity, for several monetary donations and for
2303 providing a special "duplicity friends" rate for their offsite backup
2304 service. Email info@rsync.net for details.
2305
SEE ALSO
rdiffdir(1), python(1), rdiff(1), rdiff-backup(1).
2308
2309
2310
Version 1.2.2                 January 26, 2023                 DUPLICITY(1)