1rclone(1) rclone(1)
2
3
4
6 [IMAGE: Logo (https://rclone.org/img/rclone-120x120.png)]
7 (https://rclone.org/)
8
9 Rclone is a command line program to sync files and directories to and
10 from:
11
12 · Alibaba Cloud (Aliyun) Object Storage System (OSS)
13
14 · Amazon Drive (See note (/amazonclouddrive/#status))
15
16 · Amazon S3
17
18 · Backblaze B2
19
20 · Box
21
22 · Ceph
23
24 · DigitalOcean Spaces
25
26 · Dreamhost
27
28 · Dropbox
29
30 · FTP
31
32 · Google Cloud Storage
33
34 · Google Drive
35
36 · HTTP
37
38 · Hubic
39
40 · Jottacloud
41
42 · IBM COS S3
43
44 · Koofr
45
46 · Memset Memstore
47
48 · Mega
49
50 · Microsoft Azure Blob Storage
51
52 · Microsoft OneDrive
53
54 · Minio
55
56 · Nextcloud
57
58 · OVH
59
60 · OpenDrive
61
62 · Openstack Swift
63
64 · Oracle Cloud Storage
65
66 · ownCloud
67
68 · pCloud
69
70 · put.io
71
72 · QingStor
73
74 · Rackspace Cloud Files
75
76 · Scaleway
77
78 · SFTP
79
80 · Wasabi
81
82 · WebDAV
83
84 · Yandex Disk
85
86 · The local filesystem
87
88 Features
89
90 · MD5/SHA1 hashes checked at all times for file integrity
91
92 · Timestamps preserved on files
93
94 · Partial syncs supported on a whole file basis
95
96 · Copy (https://rclone.org/commands/rclone_copy/) mode to just copy
97 new/changed files
98
99 · Sync (https://rclone.org/commands/rclone_sync/) (one way) mode to
100 make a directory identical
101
102 · Check (https://rclone.org/commands/rclone_check/) mode to check for
103 file hash equality
104
105 · Can sync to and from network, eg two different cloud accounts
106
107 · (Encryption (https://rclone.org/crypt/)) backend
108
109 · (Cache (https://rclone.org/cache/)) backend
110
111 · (Union (https://rclone.org/union/)) backend
112
113 · Optional FUSE mount (rclone mount (https://rclone.org/com‐
114 mands/rclone_mount/))
115
116 Links
117
118 · Home page (https://rclone.org/)
119
120 · GitHub project page for source and bug tracker
121 (https://github.com/ncw/rclone)
122
123 · Rclone Forum (https://forum.rclone.org)
124
125 · Downloads (https://rclone.org/downloads/)
126
128 Rclone is a Go program and comes as a single binary file.
129
130 Quickstart
131 · Download (https://rclone.org/downloads/) the relevant binary.
132
133 · Extract the rclone or rclone.exe binary from the archive
134
135 · Run rclone config to setup. See rclone config docs
136 (https://rclone.org/docs/) for more details.
137
138 See below for some expanded Linux / macOS instructions.
139
140 See the Usage section (https://rclone.org/docs/) of the docs for how to
141 use rclone, or run rclone -h.
142
143 Script installation
144 To install rclone on Linux/macOS/BSD systems, run:
145
146 curl https://rclone.org/install.sh | sudo bash
147
148 For beta installation, run:
149
150 curl https://rclone.org/install.sh | sudo bash -s beta
151
152 Note that this script checks the version of rclone installed first and
153 won't re-download if not needed.
154
155 Linux installation from precompiled binary
156 Fetch and unpack
157
158 curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
159 unzip rclone-current-linux-amd64.zip
160 cd rclone-*-linux-amd64
161
162 Copy binary file
163
164 sudo cp rclone /usr/bin/
165 sudo chown root:root /usr/bin/rclone
166 sudo chmod 755 /usr/bin/rclone
167
168 Install manpage
169
170 sudo mkdir -p /usr/local/share/man/man1
171 sudo cp rclone.1 /usr/local/share/man/man1/
172 sudo mandb
173
174 Run rclone config to setup. See rclone config docs
175 (https://rclone.org/docs/) for more details.
176
177 rclone config
178
179 macOS installation from precompiled binary
180 Download the latest version of rclone.
181
182 cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip
183
184 Unzip the download and cd to the extracted folder.
185
186 unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64
187
188 Move rclone to your $PATH. You will be prompted for your password.
189
190 sudo mkdir -p /usr/local/bin
191 sudo mv rclone /usr/local/bin/
192
193 (the mkdir command is safe to run, even if the directory already ex‐
194 ists).
195
196 Remove the leftover files.
197
198 cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
199
200 Run rclone config to setup. See rclone config docs
201 (https://rclone.org/docs/) for more details.
202
203 rclone config
204
205 Install from source
206 Make sure you have at least Go (https://golang.org/) 1.7 installed.
207 Download go (https://golang.org/dl/) if necessary. The latest release
208 is recommended. Then
209
210 git clone https://github.com/ncw/rclone.git
211 cd rclone
212 go build
213 ./rclone version
214
215 You can also build and install rclone in the GOPATH
216 (https://github.com/golang/go/wiki/GOPATH) (which defaults to ~/go)
217 with:
218
219 go get -u -v github.com/ncw/rclone
220
221 and this will build the binary in $GOPATH/bin (~/go/bin/rclone by de‐
222 fault) after downloading the source to
223 $GOPATH/src/github.com/ncw/rclone (~/go/src/github.com/ncw/rclone by
224 default).
225
226 Installation with Ansible
227 This can be done with Stefan Weichinger's ansible role
228 (https://github.com/stefangweichinger/ansible-rclone).
229
230 Instructions
231
232 1. git clone https://github.com/stefangweichinger/ansible-rclone.git
233 into your local roles-directory
234
235 2. add the role to the hosts you want rclone installed to:
236
237 - hosts: rclone-hosts
238 roles:
239 - rclone
240
241 Configure
242 First, you'll need to configure rclone. As the object storage systems
243 have quite complicated authentication these are kept in a config file.
244 (See the --config entry for how to find the config file and choose its
245 location.)
246
247 The easiest way to make the config is to run rclone with the config op‐
248 tion:
249
250 rclone config
251
252 See the following for detailed instructions for
253
254 · Alias (https://rclone.org/alias/)
255
256 · Amazon Drive (https://rclone.org/amazonclouddrive/)
257
258 · Amazon S3 (https://rclone.org/s3/)
259
260 · Backblaze B2 (https://rclone.org/b2/)
261
262 · Box (https://rclone.org/box/)
263
264 · Cache (https://rclone.org/cache/)
265
266 · Crypt (https://rclone.org/crypt/) - to encrypt other remotes
267
268 · DigitalOcean Spaces (/s3/#digitalocean-spaces)
269
270 · Dropbox (https://rclone.org/dropbox/)
271
272 · FTP (https://rclone.org/ftp/)
273
274 · Google Cloud Storage (https://rclone.org/googlecloudstorage/)
275
276 · Google Drive (https://rclone.org/drive/)
277
278 · HTTP (https://rclone.org/http/)
279
280 · Hubic (https://rclone.org/hubic/)
281
282 · Jottacloud (https://rclone.org/jottacloud/)
283
284 · Koofr (https://rclone.org/koofr/)
285
286 · Mega (https://rclone.org/mega/)
287
288 · Microsoft Azure Blob Storage (https://rclone.org/azureblob/)
289
290 · Microsoft OneDrive (https://rclone.org/onedrive/)
291
292 · Openstack Swift / Rackspace Cloudfiles / Memset Memstore
293 (https://rclone.org/swift/)
294
295 · OpenDrive (https://rclone.org/opendrive/)
296
297 · Pcloud (https://rclone.org/pcloud/)
298
299 · QingStor (https://rclone.org/qingstor/)
300
301 · SFTP (https://rclone.org/sftp/)
302
303 · Union (https://rclone.org/union/)
304
305 · WebDAV (https://rclone.org/webdav/)
306
307 · Yandex Disk (https://rclone.org/yandex/)
308
309 · The local filesystem (https://rclone.org/local/)
310
311 Usage
312 Rclone syncs a directory tree from one storage system to another.
313
314 Its syntax is like this
315
316 Syntax: [options] subcommand <parameters> <parameters...>
317
318 Source and destination paths are specified by the name you gave the
319 storage system in the config file then the sub path, eg “drive:myfold‐
320 er” to look at “myfolder” in Google drive.
321
322 You can define as many storage paths as you like in the config file.
323
324 Subcommands
325 rclone uses a system of subcommands. For example
326
327 rclone ls remote:path # lists a remote
328 rclone copy /local/path remote:path # copies /local/path to the remote
329 rclone sync /local/path remote:path # syncs /local/path to the remote
330
331 rclone config
332 Enter an interactive configuration session.
333
334 Synopsis
335 Enter an interactive configuration session where you can setup new re‐
336 motes and manage existing ones. You may also set or remove a password
337 to protect your configuration.
338
339 rclone config [flags]
340
341 Options
342 -h, --help help for config
343
344 rclone copy
345 Copy files from source to dest, skipping already copied
346
347 Synopsis
348 Copy the source to the destination. Doesn't transfer unchanged files,
349 testing by size and modification time or MD5SUM. Doesn't delete files
350 from the destination.
351
352 Note that it is always the contents of the directory that is synced,
353 not the directory so when source:path is a directory, it's the contents
354 of source:path that are copied, not the directory name and contents.
355
356 If dest:path doesn't exist, it is created and the source:path contents
357 go there.
358
359 For example
360
361 rclone copy source:sourcepath dest:destpath
362
363 Let's say there are two files in sourcepath
364
365 sourcepath/one.txt
366 sourcepath/two.txt
367
368 This copies them to
369
370 destpath/one.txt
371 destpath/two.txt
372
373 Not to
374
375 destpath/sourcepath/one.txt
376 destpath/sourcepath/two.txt
377
378 If you are familiar with rsync, rclone always works as if you had writ‐
379 ten a trailing / - meaning “copy the contents of this directory”. This
380 applies to all commands and whether you are talking about the source or
381 destination.
382
383 See the –no-traverse (/docs/#no-traverse) option for controlling
384 whether rclone lists the destination directory or not. Supplying this
385 option when copying a small number of files into a large destination
386 can speed transfers up greatly.
387
388 For example, if you have many files in /path/to/src but only a few of
389 them change every day, you can to copy all the files which have changed
390 recently very efficiently like this:
391
392 rclone copy --max-age 24h --no-traverse /path/to/src remote:
393
394 Note: Use the -P/--progress flag to view real-time transfer statistics
395
396 rclone copy source:path dest:path [flags]
397
398 Options
399 --create-empty-src-dirs Create empty source dirs on destination after copy
400 -h, --help help for copy
401
402 rclone sync
403 Make source and dest identical, modifying destination only.
404
405 Synopsis
406 Sync the source to the destination, changing the destination only.
407 Doesn't transfer unchanged files, testing by size and modification time
408 or MD5SUM. Destination is updated to match source, including deleting
409 files if necessary.
410
411 Important: Since this can cause data loss, test first with the
412 --dry-run flag to see exactly what would be copied and deleted.
413
414 Note that files in the destination won't be deleted if there were any
415 errors at any point.
416
417 It is always the contents of the directory that is synced, not the di‐
418 rectory so when source:path is a directory, it's the contents of
419 source:path that are copied, not the directory name and contents. See
420 extended explanation in the copy command above if unsure.
421
422 If dest:path doesn't exist, it is created and the source:path contents
423 go there.
424
425 Note: Use the -P/--progress flag to view real-time transfer statistics
426
427 rclone sync source:path dest:path [flags]
428
429 Options
430 --create-empty-src-dirs Create empty source dirs on destination after sync
431 -h, --help help for sync
432
433 rclone move
434 Move files from source to dest.
435
436 Synopsis
437 Moves the contents of the source directory to the destination directo‐
438 ry. Rclone will error if the source and destination overlap and the
439 remote does not support a server side directory move operation.
440
441 If no filters are in use and if possible this will server side move
442 source:path into dest:path. After this source:path will no longer
443 longer exist.
444
445 Otherwise for each file in source:path selected by the filters (if any)
446 this will move it into dest:path. If possible a server side move will
447 be used, otherwise it will copy it (server side if possible) into
448 dest:path then delete the original (if no errors on copy) in
449 source:path.
450
451 If you want to delete empty source directories after move, use the
452 –delete-empty-src-dirs flag.
453
454 See the –no-traverse (/docs/#no-traverse) option for controlling
455 whether rclone lists the destination directory or not. Supplying this
456 option when moving a small number of files into a large destination can
457 speed transfers up greatly.
458
459 Important: Since this can cause data loss, test first with the –dry-run
460 flag.
461
462 Note: Use the -P/--progress flag to view real-time transfer statistics.
463
464 rclone move source:path dest:path [flags]
465
466 Options
467 --create-empty-src-dirs Create empty source dirs on destination after move
468 --delete-empty-src-dirs Delete empty source dirs after move
469 -h, --help help for move
470
471 rclone delete
472 Remove the contents of path.
473
474 Synopsis
475 Remove the files in path. Unlike purge it obeys include/exclude fil‐
476 ters so can be used to selectively delete files.
477
478 rclone delete only deletes objects but leaves the directory structure
479 alone. If you want to delete a directory and all of its contents use
480 rclone purge
481
482 Eg delete all files bigger than 100MBytes
483
484 Check what would be deleted first (use either)
485
486 rclone --min-size 100M lsl remote:path
487 rclone --dry-run --min-size 100M delete remote:path
488
489 Then delete
490
491 rclone --min-size 100M delete remote:path
492
493 That reads “delete everything with a minimum size of 100 MB”, hence
494 delete all files bigger than 100MBytes.
495
496 rclone delete remote:path [flags]
497
498 Options
499 -h, --help help for delete
500
501 rclone purge
502 Remove the path and all of its contents.
503
504 Synopsis
505 Remove the path and all of its contents. Note that this does not obey
506 include/exclude filters - everything will be removed. Use delete if
507 you want to selectively delete files.
508
509 rclone purge remote:path [flags]
510
511 Options
512 -h, --help help for purge
513
514 rclone mkdir
515 Make the path if it doesn't already exist.
516
517 Synopsis
518 Make the path if it doesn't already exist.
519
520 rclone mkdir remote:path [flags]
521
522 Options
523 -h, --help help for mkdir
524
525 rclone rmdir
526 Remove the path if empty.
527
528 Synopsis
529 Remove the path. Note that you can't remove a path with objects in it,
530 use purge for that.
531
532 rclone rmdir remote:path [flags]
533
534 Options
535 -h, --help help for rmdir
536
537 rclone check
538 Checks the files in the source and destination match.
539
540 Synopsis
541 Checks the files in the source and destination match. It compares
542 sizes and hashes (MD5 or SHA1) and logs a report of files which don't
543 match. It doesn't alter the source or destination.
544
545 If you supply the –size-only flag, it will only compare the sizes not
546 the hashes as well. Use this for a quick check.
547
548 If you supply the –download flag, it will download the data from both
549 remotes and check them against each other on the fly. This can be use‐
550 ful for remotes that don't support hashes or if you really want to
551 check all the data.
552
553 If you supply the –one-way flag, it will only check that files in
554 source match the files in destination, not the other way around. Mean‐
555 ing extra files in destination that are not in the source will not
556 trigger an error.
557
558 rclone check source:path dest:path [flags]
559
560 Options
561 --download Check by downloading rather than with hash.
562 -h, --help help for check
563 --one-way Check one way only, source files must exist on remote
564
565 rclone ls
566 List the objects in the path with size and path.
567
568 Synopsis
569 Lists the objects in the source path to standard output in a human
570 readable format with size and path. Recurses by default.
571
572 Eg
573
574 $ rclone ls swift:bucket
575 60295 bevajer5jef
576 90613 canole
577 94467 diwogej7
578 37600 fubuwic
579
580 Any of the filtering options can be applied to this commmand.
581
582 There are several related list commands
583
584 · ls to list size and path of objects only
585
586 · lsl to list modification time, size and path of objects only
587
588 · lsd to list directories only
589
590 · lsf to list objects and directories in easy to parse format
591
592 · lsjson to list objects and directories in JSON format
593
594 ls,lsl,lsd are designed to be human readable. lsf is designed to be
595 human and machine readable. lsjson is designed to be machine readable.
596
597 Note that ls and lsl recurse by default - use “–max-depth 1” to stop
598 the recursion.
599
600 The other list commands lsd,lsf,lsjson do not recurse by default - use
601 “-R” to make them recurse.
602
603 Listing a non existent directory will produce an error except for re‐
604 motes which can't have empty directories (eg s3, swift, gcs, etc - the
605 bucket based remotes).
606
607 rclone ls remote:path [flags]
608
609 Options
610 -h, --help help for ls
611
612 rclone lsd
613 List all directories/containers/buckets in the path.
614
615 Synopsis
616 Lists the directories in the source path to standard output. Does not
617 recurse by default. Use the -R flag to recurse.
618
619 This command lists the total size of the directory (if known, -1 if
620 not), the modification time (if known, the current time if not), the
621 number of objects in the directory (if known, -1 if not) and the name
622 of the directory, Eg
623
624 $ rclone lsd swift:
625 494000 2018-04-26 08:43:20 10000 10000files
626 65 2018-04-26 08:43:20 1 1File
627
628 Or
629
630 $ rclone lsd drive:test
631 -1 2016-10-17 17:41:53 -1 1000files
632 -1 2017-01-03 14:40:54 -1 2500files
633 -1 2017-07-08 14:39:28 -1 4000files
634
635 If you just want the directory names use “rclone lsf –dirs-only”.
636
637 Any of the filtering options can be applied to this commmand.
638
639 There are several related list commands
640
641 · ls to list size and path of objects only
642
643 · lsl to list modification time, size and path of objects only
644
645 · lsd to list directories only
646
647 · lsf to list objects and directories in easy to parse format
648
649 · lsjson to list objects and directories in JSON format
650
651 ls,lsl,lsd are designed to be human readable. lsf is designed to be
652 human and machine readable. lsjson is designed to be machine readable.
653
654 Note that ls and lsl recurse by default - use “–max-depth 1” to stop
655 the recursion.
656
657 The other list commands lsd,lsf,lsjson do not recurse by default - use
658 “-R” to make them recurse.
659
660 Listing a non existent directory will produce an error except for re‐
661 motes which can't have empty directories (eg s3, swift, gcs, etc - the
662 bucket based remotes).
663
664 rclone lsd remote:path [flags]
665
666 Options
667 -h, --help help for lsd
668 -R, --recursive Recurse into the listing.
669
670 rclone lsl
671 List the objects in path with modification time, size and path.
672
673 Synopsis
674 Lists the objects in the source path to standard output in a human
675 readable format with modification time, size and path. Recurses by de‐
676 fault.
677
678 Eg
679
680 $ rclone lsl swift:bucket
681 60295 2016-06-25 18:55:41.062626927 bevajer5jef
682 90613 2016-06-25 18:55:43.302607074 canole
683 94467 2016-06-25 18:55:43.046609333 diwogej7
684 37600 2016-06-25 18:55:40.814629136 fubuwic
685
686 Any of the filtering options can be applied to this commmand.
687
688 There are several related list commands
689
690 · ls to list size and path of objects only
691
692 · lsl to list modification time, size and path of objects only
693
694 · lsd to list directories only
695
696 · lsf to list objects and directories in easy to parse format
697
698 · lsjson to list objects and directories in JSON format
699
700 ls,lsl,lsd are designed to be human readable. lsf is designed to be
701 human and machine readable. lsjson is designed to be machine readable.
702
703 Note that ls and lsl recurse by default - use “–max-depth 1” to stop
704 the recursion.
705
706 The other list commands lsd,lsf,lsjson do not recurse by default - use
707 “-R” to make them recurse.
708
709 Listing a non existent directory will produce an error except for re‐
710 motes which can't have empty directories (eg s3, swift, gcs, etc - the
711 bucket based remotes).
712
713 rclone lsl remote:path [flags]
714
715 Options
716 -h, --help help for lsl
717
718 rclone md5sum
719 Produces an md5sum file for all the objects in the path.
720
721 Synopsis
722 Produces an md5sum file for all the objects in the path. This is in
723 the same format as the standard md5sum tool produces.
724
725 rclone md5sum remote:path [flags]
726
727 Options
728 -h, --help help for md5sum
729
730 rclone sha1sum
731 Produces an sha1sum file for all the objects in the path.
732
733 Synopsis
734 Produces an sha1sum file for all the objects in the path. This is in
735 the same format as the standard sha1sum tool produces.
736
737 rclone sha1sum remote:path [flags]
738
739 Options
740 -h, --help help for sha1sum
741
742 rclone size
743 Prints the total size and number of objects in remote:path.
744
745 Synopsis
746 Prints the total size and number of objects in remote:path.
747
748 rclone size remote:path [flags]
749
750 Options
751 -h, --help help for size
752 --json format output as JSON
753
754 rclone version
755 Show the version number.
756
757 Synopsis
758 Show the version number, the go version and the architecture.
759
760 Eg
761
762 $ rclone version
763 rclone v1.41
764 - os/arch: linux/amd64
765 - go version: go1.10
766
767 If you supply the –check flag, then it will do an online check to com‐
768 pare your version with the latest release and the latest beta.
769
770 $ rclone version --check
771 yours: 1.42.0.6
772 latest: 1.42 (released 2018-06-16)
773 beta: 1.42.0.5 (released 2018-06-17)
774
775 Or
776
777 $ rclone version --check
778 yours: 1.41
779 latest: 1.42 (released 2018-06-16)
780 upgrade: https://downloads.rclone.org/v1.42
781 beta: 1.42.0.5 (released 2018-06-17)
782 upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
783
784 rclone version [flags]
785
786 Options
787 --check Check for new version.
788 -h, --help help for version
789
790 rclone cleanup
791 Clean up the remote if possible
792
793 Synopsis
794 Clean up the remote if possible. Empty the trash or delete old file
795 versions. Not supported by all remotes.
796
797 rclone cleanup remote:path [flags]
798
799 Options
800 -h, --help help for cleanup
801
802 rclone dedupe
803 Interactively find duplicate files and delete/rename them.
804
805 Synopsis
806 By default dedupe interactively finds duplicate files and offers to
807 delete all but one or rename them to be different. Only useful with
808 Google Drive which can have duplicate file names.
809
810 In the first pass it will merge directories with the same name. It
811 will do this iteratively until all the identical directories have been
812 merged.
813
814 The dedupe command will delete all but one of any identical (same
815 md5sum) files it finds without confirmation. This means that for most
816 duplicated files the dedupe command will not be interactive. You can
817 use --dry-run to see what would happen without doing anything.
818
819 Here is an example run.
820
821 Before - with duplicates
822
823 $ rclone lsl drive:dupes
824 6048320 2016-03-05 16:23:16.798000000 one.txt
825 6048320 2016-03-05 16:23:11.775000000 one.txt
826 564374 2016-03-05 16:23:06.731000000 one.txt
827 6048320 2016-03-05 16:18:26.092000000 one.txt
828 6048320 2016-03-05 16:22:46.185000000 two.txt
829 1744073 2016-03-05 16:22:38.104000000 two.txt
830 564374 2016-03-05 16:22:52.118000000 two.txt
831
832 Now the dedupe session
833
834 $ rclone dedupe drive:dupes
835 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
836 one.txt: Found 4 duplicates - deleting identical copies
837 one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
838 one.txt: 2 duplicates remain
839 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
840 2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
841 s) Skip and do nothing
842 k) Keep just one (choose which in next step)
843 r) Rename all to be different (by changing file.jpg to file-1.jpg)
844 s/k/r> k
845 Enter the number of the file to keep> 1
846 one.txt: Deleted 1 extra copies
847 two.txt: Found 3 duplicates - deleting identical copies
848 two.txt: 3 duplicates remain
849 1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
850 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
851 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
852 s) Skip and do nothing
853 k) Keep just one (choose which in next step)
854 r) Rename all to be different (by changing file.jpg to file-1.jpg)
855 s/k/r> r
856 two-1.txt: renamed from: two.txt
857 two-2.txt: renamed from: two.txt
858 two-3.txt: renamed from: two.txt
859
860 The result being
861
862 $ rclone lsl drive:dupes
863 6048320 2016-03-05 16:23:16.798000000 one.txt
864 564374 2016-03-05 16:22:52.118000000 two-1.txt
865 6048320 2016-03-05 16:22:46.185000000 two-2.txt
866 1744073 2016-03-05 16:22:38.104000000 two-3.txt
867
868 Dedupe can be run non interactively using the --dedupe-mode flag or by
869 using an extra parameter with the same value
870
871 · --dedupe-mode interactive - interactive as above.
872
873 · --dedupe-mode skip - removes identical files then skips anything
874 left.
875
876 · --dedupe-mode first - removes identical files then keeps the first
877 one.
878
879 · --dedupe-mode newest - removes identical files then keeps the newest
880 one.
881
882 · --dedupe-mode oldest - removes identical files then keeps the oldest
883 one.
884
885 · --dedupe-mode largest - removes identical files then keeps the
886 largest one.
887
888 · --dedupe-mode rename - removes identical files then renames the rest
889 to be different.
890
891 For example to rename all the identically named photos in your Google
892 Photos directory, do
893
894 rclone dedupe --dedupe-mode rename "drive:Google Photos"
895
896 Or
897
898 rclone dedupe rename "drive:Google Photos"
899
900 rclone dedupe [mode] remote:path [flags]
901
902 Options
903 --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
904 -h, --help help for dedupe
905
906 rclone about
907 Get quota information from the remote.
908
909 Synopsis
910 Get quota information from the remote, like bytes used/free/quota and
911 bytes used in the trash. Not supported by all remotes.
912
913 This will print to stdout something like this:
914
915 Total: 17G
916 Used: 7.444G
917 Free: 1.315G
918 Trashed: 100.000M
919 Other: 8.241G
920
921 Where the fields are:
922
923 · Total: total size available.
924
925 · Used: total size used
926
927 · Free: total amount this user could upload.
928
929 · Trashed: total amount in the trash
930
931 · Other: total amount in other storage (eg Gmail, Google Photos)
932
933 · Objects: total number of objects in the storage
934
935 Note that not all the backends provide all the fields - they will be
936 missing if they are not known for that backend. Where it is known that
937 the value is unlimited the value will also be omitted.
938
939 Use the –full flag to see the numbers written out in full, eg
940
941 Total: 18253611008
942 Used: 7993453766
943 Free: 1411001220
944 Trashed: 104857602
945 Other: 8849156022
946
947 Use the –json flag for a computer readable output, eg
948
949 {
950 "total": 18253611008,
951 "used": 7993453766,
952 "trashed": 104857602,
953 "other": 8849156022,
954 "free": 1411001220
955 }
956
957 rclone about remote: [flags]
958
959 Options
960 --full Full numbers instead of SI units
961 -h, --help help for about
962 --json Format output as JSON
963
964 rclone authorize
965 Remote authorization.
966
967 Synopsis
968 Remote authorization. Used to authorize a remote or headless rclone
969 from a machine with a browser - use as instructed by rclone config.
970
971 rclone authorize [flags]
972
973 Options
974 -h, --help help for authorize
975
976 rclone cachestats
977 Print cache stats for a remote
978
979 Synopsis
980 Print cache stats for a remote in JSON format
981
982 rclone cachestats source: [flags]
983
984 Options
985 -h, --help help for cachestats
986
987 rclone cat
988 Concatenates any files and sends them to stdout.
989
990 Synopsis
991 rclone cat sends any files to standard output.
992
993 You can use it like this to output a single file
994
995 rclone cat remote:path/to/file
996
997 Or like this to output any file in dir or subdirectories.
998
999 rclone cat remote:path/to/dir
1000
1001 Or like this to output any .txt files in dir or subdirectories.
1002
1003 rclone --include "*.txt" cat remote:path/to/dir
1004
1005 Use the –head flag to print characters only at the start, –tail for the
1006 end and –offset and –count to print a section in the middle. Note that
1007 if offset is negative it will count from the end, so –offset -1 –count
1008 1 is equivalent to –tail 1.
1009
1010 rclone cat remote:path [flags]
1011
1012 Options
1013 --count int Only print N characters. (default -1)
1014 --discard Discard the output instead of printing.
1015 --head int Only print the first N characters.
1016 -h, --help help for cat
1017 --offset int Start printing at offset N (or from end if -ve).
1018 --tail int Only print the last N characters.
1019
1020 rclone config create
1021 Create a new remote with name, type and options.
1022
1023 Synopsis
1024 Create a new remote of with and options. The options should be passed
1025 in in pairs of .
1026
1027 For example to make a swift remote of name myremote using auto config
1028 you would do:
1029
1030 rclone config create myremote swift env_auth true
1031
1032 Note that if the config process would normally ask a question the de‐
1033 fault is taken. Each time that happens rclone will print a message
1034 saying how to affect the value taken.
1035
1036 So for example if you wanted to configure a Google Drive remote but us‐
1037 ing remote authorization you would do this:
1038
1039 rclone config create mydrive drive config_is_local false
1040
1041 rclone config create <name> <type> [<key> <value>]* [flags]
1042
1043 Options
1044 -h, --help help for create
1045
1046 rclone config delete
1047 Delete an existing remote .
1048
1049 Synopsis
1050 Delete an existing remote .
1051
1052 rclone config delete <name> [flags]
1053
1054 Options
1055 -h, --help help for delete
1056
1057 rclone config dump
1058 Dump the config file as JSON.
1059
1060 Synopsis
1061 Dump the config file as JSON.
1062
1063 rclone config dump [flags]
1064
1065 Options
1066 -h, --help help for dump
1067
1068 rclone config edit
1069 Enter an interactive configuration session.
1070
1071 Synopsis
1072 Enter an interactive configuration session where you can setup new re‐
1073 motes and manage existing ones. You may also set or remove a password
1074 to protect your configuration.
1075
1076 rclone config edit [flags]
1077
1078 Options
1079 -h, --help help for edit
1080
1081 rclone config file
1082 Show path of configuration file in use.
1083
1084 Synopsis
1085 Show path of configuration file in use.
1086
1087 rclone config file [flags]
1088
1089 Options
1090 -h, --help help for file
1091
1092 rclone config password
1093 Update password in an existing remote.
1094
1095 Synopsis
1096 Update an existing remote's password. The password should be passed in
1097 in pairs of .
1098
1099 For example to set password of a remote of name myremote you would do:
1100
1101 rclone config password myremote fieldname mypassword
1102
1103 rclone config password <name> [<key> <value>]+ [flags]
1104
1105 Options
1106 -h, --help help for password
1107
1108 rclone config providers
1109 List in JSON format all the providers and options.
1110
1111 Synopsis
1112 List in JSON format all the providers and options.
1113
1114 rclone config providers [flags]
1115
1116 Options
1117 -h, --help help for providers
1118
1119 rclone config show
1120 Print (decrypted) config file, or the config for a single remote.
1121
1122 Synopsis
1123 Print (decrypted) config file, or the config for a single remote.
1124
1125 rclone config show [<remote>] [flags]
1126
1127 Options
1128 -h, --help help for show
1129
1130 rclone config update
1131 Update options in an existing remote.
1132
1133 Synopsis
1134 Update an existing remote's options. The options should be passed in
1135 in pairs of .
1136
1137 For example to update the env_auth field of a remote of name myremote
1138 you would do:
1139
1140 rclone config update myremote swift env_auth true
1141
1142 If the remote uses oauth the token will be updated, if you don't re‐
1143 quire this add an extra parameter thus:
1144
1145 rclone config update myremote swift env_auth true config_refresh_token false
1146
1147 rclone config update <name> [<key> <value>]+ [flags]
1148
1149 Options
1150 -h, --help help for update
1151
1152 rclone copyto
1153 Copy files from source to dest, skipping already copied
1154
1155 Synopsis
1156 If source:path is a file or directory then it copies it to a file or
1157 directory named dest:path.
1158
1159 This can be used to upload single files to other than their current
1160 name. If the source is a directory then it acts exactly like the copy
1161 command.
1162
1163 So
1164
1165 rclone copyto src dst
1166
1167 where src and dst are rclone paths, either remote:path or /path/to/lo‐
1168 cal or C:.
1169
1170 This will:
1171
1172 if src is file
1173 copy it to dst, overwriting an existing file if it exists
1174 if src is directory
1175 copy it to dst, overwriting existing files if they exist
1176 see copy command for full details
1177
1178 This doesn't transfer unchanged files, testing by size and modification
1179 time or MD5SUM. It doesn't delete files from the destination.
1180
1181 Note: Use the -P/--progress flag to view real-time transfer statistics
1182
1183 rclone copyto source:path dest:path [flags]
1184
1185 Options
1186 -h, --help help for copyto
1187
1188 rclone copyurl
1189 Copy url content to dest.
1190
1191 Synopsis
1192 Download urls content and copy it to destination without saving it in
1193 tmp storage.
1194
1195 rclone copyurl https://example.com dest:path [flags]
1196
1197 Options
1198 -h, --help help for copyurl
1199
1200 rclone cryptcheck
1201 Cryptcheck checks the integrity of a crypted remote.
1202
1203 Synopsis
1204 rclone cryptcheck checks a remote against a crypted remote. This is
1205 the equivalent of running rclone check, but able to check the checksums
1206 of the crypted remote.
1207
1208 For it to work the underlying remote of the cryptedremote must support
1209 some kind of checksum.
1210
1211 It works by reading the nonce from each file on the cryptedremote: and
1212 using that to encrypt each file on the remote:. It then checks the
1213 checksum of the underlying file on the cryptedremote: against the
1214 checksum of the file it has just encrypted.
1215
1216 Use it like this
1217
1218 rclone cryptcheck /path/to/files encryptedremote:path
1219
1220 You can use it like this also, but that will involve downloading all
1221 the files in remote:path.
1222
1223 rclone cryptcheck remote:path encryptedremote:path
1224
1225 After it has run it will log the status of the encryptedremote:.
1226
1227 If you supply the –one-way flag, it will only check that files in
1228 source match the files in destination, not the other way around. Mean‐
1229 ing extra files in destination that are not in the source will not
1230 trigger an error.
1231
1232 rclone cryptcheck remote:path cryptedremote:path [flags]
1233
1234 Options
1235 -h, --help help for cryptcheck
1236 --one-way Check one way only, source files must exist on destination
1237
1238 rclone cryptdecode
1239 Cryptdecode returns unencrypted file names.
1240
1241 Synopsis
1242 rclone cryptdecode returns unencrypted file names when provided with a
1243 list of encrypted file names. List limit is 10 items.
1244
1245 If you supply the –reverse flag, it will return encrypted file names.
1246
1247 use it like this
1248
1249 rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
1250
1251 rclone cryptdecode --reverse encryptedremote: filename1 filename2
1252
1253 rclone cryptdecode encryptedremote: encryptedfilename [flags]
1254
1255 Options
1256 -h, --help help for cryptdecode
1257 --reverse Reverse cryptdecode, encrypts filenames
1258
1259 rclone dbhashsum
1260 Produces a Dropbox hash file for all the objects in the path.
1261
1262 Synopsis
1263 Produces a Dropbox hash file for all the objects in the path. The
1264 hashes are calculated according to Dropbox content hash rules
1265 (https://www.dropbox.com/developers/reference/content-hash). The out‐
1266 put is in the same format as md5sum and sha1sum.
1267
1268 rclone dbhashsum remote:path [flags]
1269
1270 Options
1271 -h, --help help for dbhashsum
1272
1273 rclone deletefile
1274 Remove a single file from remote.
1275
1276 Synopsis
1277 Remove a single file from remote. Unlike delete it cannot be used to
1278 remove a directory and it doesn't obey include/exclude filters - if the
1279 specified file exists, it will always be removed.
1280
1281 rclone deletefile remote:path [flags]
1282
1283 Options
1284 -h, --help help for deletefile
1285
1286 rclone genautocomplete
1287 Output completion script for a given shell.
1288
1289 Synopsis
1290 Generates a shell completion script for rclone. Run with –help to list
1291 the supported shells.
1292
1293 Options
1294 -h, --help help for genautocomplete
1295
1296 rclone genautocomplete bash
1297 Output bash completion script for rclone.
1298
1299 Synopsis
1300 Generates a bash shell autocompletion script for rclone.
1301
1302 This writes to /etc/bash_completion.d/rclone by default so will proba‐
1303 bly need to be run with sudo or as root, eg
1304
1305 sudo rclone genautocomplete bash
1306
1307 Logout and login again to use the autocompletion scripts, or source
1308 them directly
1309
1310 . /etc/bash_completion
1311
1312 If you supply a command line argument the script will be written there.
1313
1314 rclone genautocomplete bash [output_file] [flags]
1315
1316 Options
1317 -h, --help help for bash
1318
1319 rclone genautocomplete zsh
1320 Output zsh completion script for rclone.
1321
1322 Synopsis
1323 Generates a zsh autocompletion script for rclone.
1324
1325 This writes to /usr/share/zsh/vendor-completions/_rclone by default so
1326 will probably need to be run with sudo or as root, eg
1327
1328 sudo rclone genautocomplete zsh
1329
1330 Logout and login again to use the autocompletion scripts, or source
1331 them directly
1332
1333 autoload -U compinit && compinit
1334
1335 If you supply a command line argument the script will be written there.
1336
1337 rclone genautocomplete zsh [output_file] [flags]
1338
1339 Options
1340 -h, --help help for zsh
1341
1342 rclone gendocs
1343 Output markdown docs for rclone to the directory supplied.
1344
1345 Synopsis
1346 This produces markdown docs for the rclone commands to the directory
1347 supplied. These are in a format suitable for hugo to render into the
1348 rclone.org website.
1349
1350 rclone gendocs output_directory [flags]
1351
1352 Options
1353 -h, --help help for gendocs
1354
1355 rclone hashsum
1356 Produces an hashsum file for all the objects in the path.
1357
1358 Synopsis
1359 Produces a hash file for all the objects in the path using the hash
1360 named. The output is in the same format as the standard md5sum/sha1sum
1361 tool.
1362
1363 Run without a hash to see the list of supported hashes, eg
1364
1365 $ rclone hashsum
1366 Supported hashes are:
1367 * MD5
1368 * SHA-1
1369 * DropboxHash
1370 * QuickXorHash
1371
1372 Then
1373
1374 $ rclone hashsum MD5 remote:path
1375
1376 rclone hashsum <hash> remote:path [flags]
1377
1378 Options
1379 -h, --help help for hashsum
1380
1381 rclone link
1382 Generate public link to file/folder.
1383
1384 Synopsis
1385 rclone link will create or retrieve a public link to the given file or
1386 folder.
1387
1388 rclone link remote:path/to/file
1389 rclone link remote:path/to/folder/
1390
1391 If successful, the last line of the output will contain the link. Ex‐
1392 act capabilities depend on the remote, but the link will always be cre‐
1393 ated with the least constraints – e.g. no expiry, no password protec‐
1394 tion, accessible without account.
1395
1396 rclone link remote:path [flags]
1397
1398 Options
1399 -h, --help help for link
1400
1401 rclone listremotes
1402 List all the remotes in the config file.
1403
1404 Synopsis
1405 rclone listremotes lists all the available remotes from the config
1406 file.
1407
1408 When uses with the -l flag it lists the types too.
1409
1410 rclone listremotes [flags]
1411
1412 Options
1413 -h, --help help for listremotes
1414 --long Show the type as well as names.
1415
1416 rclone lsf
1417 List directories and objects in remote:path formatted for parsing
1418
1419 Synopsis
1420 List the contents of the source path (directories and objects) to stan‐
1421 dard output in a form which is easy to parse by scripts. By default
1422 this will just be the names of the objects and directories, one per
1423 line. The directories will have a / suffix.
1424
1425 Eg
1426
1427 $ rclone lsf swift:bucket
1428 bevajer5jef
1429 canole
1430 diwogej7
1431 ferejej3gux/
1432 fubuwic
1433
1434 Use the –format option to control what gets listed. By default this is
1435 just the path, but you can use these parameters to control the output:
1436
1437 p - path
1438 s - size
1439 t - modification time
1440 h - hash
1441 i - ID of object
1442 o - Original ID of underlying object
1443 m - MimeType of object if known
1444 e - encrypted name
1445
1446 So if you wanted the path, size and modification time, you would use
1447 –format “pst”, or maybe –format “tsp” to put the path last.
1448
1449 Eg
1450
1451 $ rclone lsf --format "tsp" swift:bucket
1452 2016-06-25 18:55:41;60295;bevajer5jef
1453 2016-06-25 18:55:43;90613;canole
1454 2016-06-25 18:55:43;94467;diwogej7
1455 2018-04-26 08:50:45;0;ferejej3gux/
1456 2016-06-25 18:55:40;37600;fubuwic
1457
1458 If you specify “h” in the format you will get the MD5 hash by default,
1459 use the “–hash” flag to change which hash you want. Note that this can
1460 be returned as an empty string if it isn't available on the object (and
1461 for directories), “ERROR” if there was an error reading it from the ob‐
1462 ject and “UNSUPPORTED” if that object does not support that hash type.
1463
1464 For example to emulate the md5sum command you can use
1465
1466 rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
1467
1468 Eg
1469
1470 $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
1471 7908e352297f0f530b84a756f188baa3 bevajer5jef
1472 cd65ac234e6fea5925974a51cdd865cc canole
1473 03b5341b4f234b9d984d03ad076bae91 diwogej7
1474 8fd37c3810dd660778137ac3a66cc06d fubuwic
1475 99713e14a4c4ff553acaf1930fad985b gixacuh7ku
1476
1477 (Though “rclone md5sum .” is an easier way of typing this.)
1478
1479 By default the separator is “;” this can be changed with the –separator
1480 flag. Note that separators aren't escaped in the path so putting it
1481 last is a good strategy.
1482
1483 Eg
1484
1485 $ rclone lsf --separator "," --format "tshp" swift:bucket
1486 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
1487 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
1488 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
1489 2018-04-26 08:52:53,0,,ferejej3gux/
1490 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic
1491
1492 You can output in CSV standard format. This will escape things in " if
1493 they contain ,
1494
1495 Eg
1496
1497 $ rclone lsf --csv --files-only --format ps remote:path
1498 test.log,22355
1499 test.sh,449
1500 "this file contains a comma, in the file name.txt",6
1501
1502 Note that the –absolute parameter is useful for making lists of files
1503 to pass to an rclone copy with the –files-from flag.
1504
1505 For example to find all the files modified within one day and copy
1506 those only (without traversing the whole directory structure):
1507
1508 rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
1509 rclone copy --files-from new_files /path/to/local remote:path
1510
1511 Any of the filtering options can be applied to this commmand.
1512
1513 There are several related list commands
1514
1515 · ls to list size and path of objects only
1516
1517 · lsl to list modification time, size and path of objects only
1518
1519 · lsd to list directories only
1520
1521 · lsf to list objects and directories in easy to parse format
1522
1523 · lsjson to list objects and directories in JSON format
1524
1525 ls,lsl,lsd are designed to be human readable. lsf is designed to be
1526 human and machine readable. lsjson is designed to be machine readable.
1527
1528 Note that ls and lsl recurse by default - use “–max-depth 1” to stop
1529 the recursion.
1530
1531 The other list commands lsd,lsf,lsjson do not recurse by default - use
1532 “-R” to make them recurse.
1533
1534 Listing a non existent directory will produce an error except for re‐
1535 motes which can't have empty directories (eg s3, swift, gcs, etc - the
1536 bucket based remotes).
1537
1538 rclone lsf remote:path [flags]
1539
1540 Options
1541 --absolute Put a leading / in front of path names.
1542 --csv Output in CSV format.
1543 -d, --dir-slash Append a slash to directory names. (default true)
1544 --dirs-only Only list directories.
1545 --files-only Only list files.
1546 -F, --format string Output format - see help for details (default "p")
1547 --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
1548 -h, --help help for lsf
1549 -R, --recursive Recurse into the listing.
1550 -s, --separator string Separator for the items in the format. (default ";")
1551
1552 rclone lsjson
1553 List directories and objects in the path in JSON format.
1554
1555 Synopsis
1556 List directories and objects in the path in JSON format.
1557
1558 The output is an array of Items, where each Item looks like this
1559
1560 { “Hashes” : { “SHA-1” : “f572d396fae9206628714fb2ce00f72e94f2258f”,
1561 “MD5” : “b1946ac92492d2347c6235b4d2611184”, “DropboxHash” :
1562 “ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc” },
1563 “ID”: “y2djkhiujf83u33”, “OrigID”: “UYOJVTUW00Q1RzTDA”, “IsDir” :
1564 false, “MimeType” : “application/octet-stream”, “ModTime” :
1565 “2017-05-31T16:15:57.034468261+01:00”, “Name” : “file.txt”, “Encrypted”
1566 : “v0qpsdq8anpci8n929v3uu9338”, “Path” :
1567 “full/path/goes/here/file.txt”, “Size” : 6 }
1568
1569 If –hash is not specified the Hashes property won't be emitted.
1570
1571 If –no-modtime is specified then ModTime will be blank.
1572
1573 If –encrypted is not specified the Encrypted won't be emitted.
1574
1575 If –dirs-only is not specified files in addition to directories are re‐
1576 turned
1577
1578 If –files-only is not specified directories in addition to the files
1579 will be returned.
1580
1581 The Path field will only show folders below the remote path being list‐
1582 ed. If “remote:path” contains the file “subfolder/file.txt”, the Path
1583 for “file.txt” will be “subfolder/file.txt”, not “remote:path/subfold‐
1584 er/file.txt”. When used without –recursive the Path will always be the
1585 same as Name.
1586
1587 The time is in RFC3339 format with up to nanosecond precision. The
1588 number of decimal digits in the seconds will depend on the precision
1589 that the remote can hold the times, so if times are accurate to the
1590 nearest millisecond (eg Google Drive) then 3 digits will always be
1591 shown (“2017-05-31T16:15:57.034+01:00”) whereas if the times are accu‐
1592 rate to the nearest second (Dropbox, Box, WebDav etc) no digits will be
1593 shown (“2017-05-31T16:15:57+01:00”).
1594
1595 The whole output can be processed as a JSON blob, or alternatively it
1596 can be processed line by line as each item is written one to a line.
1597
1598 Any of the filtering options can be applied to this commmand.
1599
1600 There are several related list commands
1601
1602 · ls to list size and path of objects only
1603
1604 · lsl to list modification time, size and path of objects only
1605
1606 · lsd to list directories only
1607
1608 · lsf to list objects and directories in easy to parse format
1609
1610 · lsjson to list objects and directories in JSON format
1611
1612 ls,lsl,lsd are designed to be human readable. lsf is designed to be
1613 human and machine readable. lsjson is designed to be machine readable.
1614
1615 Note that ls and lsl recurse by default - use “–max-depth 1” to stop
1616 the recursion.
1617
1618 The other list commands lsd,lsf,lsjson do not recurse by default - use
1619 “-R” to make them recurse.
1620
1621 Listing a non existent directory will produce an error except for re‐
1622 motes which can't have empty directories (eg s3, swift, gcs, etc - the
1623 bucket based remotes).
1624
1625 rclone lsjson remote:path [flags]
1626
1627 Options
1628 --dirs-only Show only directories in the listing.
1629 -M, --encrypted Show the encrypted names.
1630 --files-only Show only files in the listing.
1631 --hash Include hashes in the output (may take longer).
1632 -h, --help help for lsjson
1633 --no-modtime Don't read the modification time (can speed things up).
1634 --original Show the ID of the underlying Object.
1635 -R, --recursive Recurse into the listing.
1636
1637 rclone mount
1638 Mount the remote as file system on a mountpoint.
1639
1640 Synopsis
1641 rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
1642 Rclone's cloud storage systems as a file system with FUSE.
1643
1644 First set up your remote using rclone config. Check it works with
1645 rclone ls etc.
1646
1647 Start the mount like this
1648
1649 rclone mount remote:path/to/files /path/to/local/mount
1650
1651 Or on Windows like this where X: is an unused drive letter
1652
1653 rclone mount remote:path/to/files X:
1654
1655 When the program ends, either via Ctrl+C or receiving a SIGINT or
1656 SIGTERM signal, the mount is automatically stopped.
1657
1658 The umount operation can fail, for example when the mountpoint is busy.
1659 When that happens, it is the user's responsibility to stop the mount
1660 manually with
1661
1662 # Linux
1663 fusermount -u /path/to/local/mount
1664 # OS X
1665 umount /path/to/local/mount
1666
1667 Installing on Windows
1668 To run rclone mount on Windows, you will need to download and install
1669 WinFsp (http://www.secfs.net/winfsp/).
1670
1671 WinFsp is an open source (https://github.com/billziss-gh/winfsp) Win‐
1672 dows File System Proxy which makes it easy to write user space file
1673 systems for Windows. It provides a FUSE emulation layer which rclone
1674 uses combination with cgofuse (https://github.com/billziss-gh/cgofuse).
1675 Both of these packages are by Bill Zissimopoulos who was very helpful
1676 during the implementation of rclone mount for Windows.
1677
1678 Windows caveats
1679 Note that drives created as Administrator are not visible by other ac‐
1680 counts (including the account that was elevated as Administrator). So
1681 if you start a Windows drive from an Administrative Command Prompt and
1682 then try to access the same drive from Explorer (which does not run as
1683 Administrator), you will not be able to see the new drive.
1684
1685 The easiest way around this is to start the drive from a normal command
1686 prompt. It is also possible to start a drive from the SYSTEM account
1687 (using the WinFsp.Launcher infrastructure (https://github.com/billziss-
1688 gh/winfsp/wiki/WinFsp-Service-Architecture)) which creates drives ac‐
1689 cessible for everyone on the system or alternatively using the nssm
1690 service manager (https://nssm.cc/usage).
1691
1692 Limitations
1693 Without the use of “–vfs-cache-mode” this can only write files sequen‐
1694 tially, it can only seek when reading. This means that many applica‐
1695 tions won't work with their files on an rclone mount without
1696 “–vfs-cache-mode writes” or “–vfs-cache-mode full”. See the File
1697 Caching section for more info.
1698
1699 The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hu‐
1700 bic) won't work from the root - you will need to specify a bucket, or a
1701 path within the bucket. So swift: won't work whereas swift:bucket will
1702 as will swift:bucket/path. None of these support the concept of direc‐
1703 tories, so empty directories will have a tendency to disappear once
1704 they fall out of the directory cache.
1705
1706 Only supported on Linux, FreeBSD, OS X and Windows at the moment.
1707
1708 rclone mount vs rclone sync/copy
1709 File systems expect things to be 100% reliable, whereas cloud storage
1710 systems are a long way from 100% reliable. The rclone sync/copy com‐
1711 mands cope with this with lots of retries. However rclone mount can't
1712 use retries in the same way without making local copies of the uploads.
1713 Look at the file caching for solutions to make mount more reliable.
1714
1715 Attribute caching
1716 You can use the flag –attr-timeout to set the time the kernel caches
1717 the attributes (size, modification time etc) for directory entries.
1718
1719 The default is “1s” which caches files just long enough to avoid too
1720 many callbacks to rclone from the kernel.
1721
1722 In theory 0s should be the correct value for filesystems which can
1723 change outside the control of the kernel. However this causes quite a
1724 few problems such as rclone using too much memory
1725 (https://github.com/ncw/rclone/issues/2157), rclone not serving files
1726 to samba (https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-is‐
1727 sue/5112) and excessive time listing directories
1728 (https://github.com/ncw/rclone/issues/2095#issuecomment-371141147).
1729
1730 The kernel can cache the info about a file for the time given by “–at‐
1731 tr-timeout”. You may see corruption if the remote file changes length
1732 during this window. It will show up as either a truncated file or a
1733 file with garbage on the end. With “–attr-timeout 1s” this is very un‐
1734 likely but not impossible. The higher you set “–attr-timeout” the more
1735 likely it is. The default setting of “1s” is the lowest setting which
1736 mitigates the problems above.
1737
1738 If you set it higher (`10s' or `1m' say) then the kernel will call back
1739 to rclone less often making it more efficient, however there is more
1740 chance of the corruption issue above.
1741
1742 If files don't change on the remote outside of the control of rclone
1743 then there is no chance of corruption.
1744
1745 This is the same as setting the attr_timeout option in mount.fuse.
1746
1747 Filters
1748 Note that all the rclone filters can be used to select a subset of the
1749 files to be visible in the mount.
1750
1751 systemd
1752 When running rclone mount as a systemd service, it is possible to use
1753 Type=notify. In this case the service will enter the started state af‐
1754 ter the mountpoint has been successfully set up. Units having the
1755 rclone mount service specified as a requirement will see all files and
1756 folders immediately in this mode.
1757
1758 chunked reading
1759 –vfs-read-chunk-size will enable reading the source objects in parts.
1760 This can reduce the used download quota for some remotes by requesting
1761 only chunks from the remote that are actually read at the cost of an
1762 increased number of requests.
1763
1764 When –vfs-read-chunk-size-limit is also specified and greater than
1765 –vfs-read-chunk-size, the chunk size for each open file will get dou‐
1766 bled for each chunk read, until the specified value is reached. A val‐
1767 ue of -1 will disable the limit and the chunk size will grow indefi‐
1768 nitely.
1769
1770 With –vfs-read-chunk-size 100M and –vfs-read-chunk-size-limit 0 the
1771 following parts will be downloaded: 0-100M, 100M-200M, 200M-300M,
1772 300M-400M and so on. When –vfs-read-chunk-size-limit 500M is speci‐
1773 fied, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M,
1774 1200M-1700M and so on.
1775
1776 Chunked reading will only work with –vfs-cache-mode < full, as the file
1777 will always be copied to the vfs cache before opening with
1778 –vfs-cache-mode full.
1779
1780 Directory Cache
1781 Using the --dir-cache-time flag, you can set how long a directory
1782 should be considered up to date and not refreshed from the backend.
1783 Changes made locally in the mount may appear immediately or invalidate
1784 the cache. However, changes done on the remote will only be picked up
1785 once the cache expires.
1786
1787 Alternatively, you can send a SIGHUP signal to rclone for it to flush
1788 all directory caches, regardless of how old they are. Assuming only
1789 one rclone instance is running, you can reset the cache like this:
1790
1791 kill -SIGHUP $(pidof rclone)
1792
1793 If you configure rclone with a remote control (/rc) then you can use
1794 rclone rc to flush the whole directory cache:
1795
1796 rclone rc vfs/forget
1797
1798 Or individual files or directories:
1799
1800 rclone rc vfs/forget file=path/to/file dir=path/to/dir
1801
1802 File Buffering
1803 The --buffer-size flag determines the amount of memory, that will be
1804 used to buffer data in advance.
1805
1806 Each open file descriptor will try to keep the specified amount of data
1807 in memory at all times. The buffered data is bound to one file de‐
1808 scriptor and won't be shared between multiple open file descriptors of
1809 the same file.
1810
1811 This flag is a upper limit for the used memory per file descriptor.
1812 The buffer will only use memory for data that is downloaded but not not
1813 yet read. If the buffer is empty, only a small amount of memory will
1814 be used. The maximum memory used by rclone for buffering can be up to
1815 --buffer-size * open files.
1816
1817 File Caching
1818 These flags control the VFS file caching options. The VFS layer is
1819 used by rclone mount to make a cloud storage system work more like a
1820 normal file system.
1821
1822 You'll need to enable VFS caching if you want, for example, to read and
1823 write simultaneously to a file. See below for more details.
1824
1825 Note that the VFS cache works in addition to the cache backend and you
1826 may find that you need one or the other or both.
1827
1828 --cache-dir string Directory rclone will use for caching.
1829 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
1830 --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
1831 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
1832 --vfs-cache-max-size int Max total size of objects in the cache. (default off)
1833
1834 If run with -vv rclone will print the location of the file cache. The
1835 files are stored in the user cache file area which is OS dependent but
1836 can be controlled with --cache-dir or setting the appropriate environ‐
1837 ment variable.
1838
1839 The cache has 4 different modes selected by --vfs-cache-mode. The
1840 higher the cache mode the more compatible rclone becomes at the cost of
1841 using disk space.
1842
1843 Note that files are written back to the remote only when they are
1844 closed so if rclone is quit or dies with open files then these won't
1845 get written back to the remote. However they will still be in the on
1846 disk cache.
1847
1848 If using –vfs-cache-max-size note that the cache may exceed this size
1849 for two reasons. Firstly because it is only checked every
1850 –vfs-cache-poll-interval. Secondly because open files cannot be evict‐
1851 ed from the cache.
1852
1853 –vfs-cache-mode off
1854 In this mode the cache will read directly from the remote and write di‐
1855 rectly to the remote without caching anything on disk.
1856
1857 This will mean some operations are not possible
1858
1859 · Files can't be opened for both read AND write
1860
1861 · Files opened for write can't be seeked
1862
1863 · Existing files opened for write must have O_TRUNC set
1864
1865 · Files open for read with O_TRUNC will be opened write only
1866
1867 · Files open for write only will behave as if O_TRUNC was supplied
1868
1869 · Open modes O_APPEND, O_TRUNC are ignored
1870
1871 · If an upload fails it can't be retried
1872
1873 –vfs-cache-mode minimal
1874 This is very similar to “off” except that files opened for read AND
1875 write will be buffered to disks. This means that files opened for
1876 write will be a lot more compatible, but uses the minimal disk space.
1877
1878 These operations are not possible
1879
1880 · Files opened for write only can't be seeked
1881
1882 · Existing files opened for write must have O_TRUNC set
1883
1884 · Files opened for write only will ignore O_APPEND, O_TRUNC
1885
1886 · If an upload fails it can't be retried
1887
1888 –vfs-cache-mode writes
1889 In this mode files opened for read only are still read directly from
1890 the remote, write only and read/write files are buffered to disk first.
1891
1892 This mode should support all normal file system operations.
1893
1894 If an upload fails it will be retried up to –low-level-retries times.
1895
1896 –vfs-cache-mode full
1897 In this mode all reads and writes are buffered to and from disk. When
1898 a file is opened for read it will be downloaded in its entirety first.
1899
1900 This may be appropriate for your needs, or you may prefer to look at
1901 the cache backend which does a much more sophisticated job of caching,
1902 including caching directory hierarchies and chunks of files.
1903
1904 In this mode, unlike the others, when a file is written to the disk, it
1905 will be kept on the disk after it is written to the remote. It will be
1906 purged on a schedule according to --vfs-cache-max-age.
1907
1908 This mode should support all normal file system operations.
1909
1910 If an upload or download fails it will be retried up to –low-level-re‐
1911 tries times.
1912
1913 rclone mount remote:path /path/to/mountpoint [flags]
1914
1915 Options
1916 --allow-non-empty Allow mounting over a non-empty directory.
1917 --allow-other Allow access to other users.
1918 --allow-root Allow access to root user.
1919 --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
1920 --daemon Run mount as a daemon (background mode).
1921 --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
1922 --debug-fuse Debug the FUSE internals - needs -v.
1923 --default-permissions Makes kernel enforce access control based on the file mode.
1924 --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
1925 --dir-perms FileMode Directory permissions (default 0777)
1926 --file-perms FileMode File permissions (default 0666)
1927 --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
1928 --gid uint32 Override the gid field set by the filesystem. (default 502)
1929 -h, --help help for mount
1930 --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k)
1931 --no-checksum Don't compare checksums on up/download.
1932 --no-modtime Don't read/write the modification time (can speed things up).
1933 --no-seek Don't allow seeking in files.
1934 -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
1935 --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
1936 --read-only Mount read-only.
1937 --uid uint32 Override the uid field set by the filesystem. (default 502)
1938 --umask int Override the permission bits set by the filesystem.
1939 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
1940 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
1941 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
1942 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
1943 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
1944 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
1945 --volname string Set the volume name (not supported by all OSes).
1946 --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
1947
1948 rclone moveto
1949 Move file or directory from source to dest.
1950
1951 Synopsis
1952 If source:path is a file or directory then it moves it to a file or di‐
1953 rectory named dest:path.
1954
1955 This can be used to rename files or upload single files to other than
1956 their existing name. If the source is a directory then it acts exacty
1957 like the move command.
1958
1959 So
1960
1961 rclone moveto src dst
1962
1963 where src and dst are rclone paths, either remote:path or /path/to/lo‐
1964 cal or C:.
1965
1966 This will:
1967
1968 if src is file
1969 move it to dst, overwriting an existing file if it exists
1970 if src is directory
1971 move it to dst, overwriting existing files if they exist
1972 see move command for full details
1973
1974 This doesn't transfer unchanged files, testing by size and modification
1975 time or MD5SUM. src will be deleted on successful transfer.
1976
1977 Important: Since this can cause data loss, test first with the –dry-run
1978 flag.
1979
1980 Note: Use the -P/--progress flag to view real-time transfer statistics.
1981
1982 rclone moveto source:path dest:path [flags]
1983
1984 Options
1985 -h, --help help for moveto
1986
1987 rclone ncdu
1988 Explore a remote with a text based user interface.
1989
1990 Synopsis
1991 This displays a text based user interface allowing the navigation of a
1992 remote. It is most useful for answering the question - “What is using
1993 all my disk space?”.
1994
1995 To make the user interface it first scans the entire remote given and
1996 builds an in memory representation. rclone ncdu can be used during
1997 this scanning phase and you will see it building up the directory
1998 structure as it goes along.
1999
2000 Here are the keys - press `?' to toggle the help on and off
2001
2002 ↑,↓ or k,j to Move
2003 →,l to enter
2004 ←,h to return
2005 c toggle counts
2006 g toggle graph
2007 n,s,C sort by name,size,count
2008 d delete file/directory
2009 ^L refresh screen
2010 ? to toggle help on and off
2011 q/ESC/c-C to quit
2012
2013 This an homage to the ncdu tool (https://dev.yorhel.nl/ncdu) but for
2014 rclone remotes. It is missing lots of features at the moment but is
2015 useful as it stands.
2016
2017 Note that it might take some time to delete big files/folders. The UI
2018 won't respond in the meantime since the deletion is done synchronously.
2019
2020 rclone ncdu remote:path [flags]
2021
2022 Options
2023 -h, --help help for ncdu
2024
2025 rclone obscure
2026 Obscure password for use in the rclone.conf
2027
2028 Synopsis
2029 Obscure password for use in the rclone.conf
2030
2031 rclone obscure password [flags]
2032
2033 Options
2034 -h, --help help for obscure
2035
2036 rclone rc
2037 Run a command against a running rclone.
2038
2039 Synopsis
2040 This runs a command against a running rclone. Use the –url flag to
2041 specify an non default URL to connect on. This can be either a “:port”
2042 which is taken to mean “http://localhost:port” or a “host:port” which
2043 is taken to mean “http://host:port”
2044
2045 A username and password can be passed in with –user and –pass.
2046
2047 Note that –rc-addr, –rc-user, –rc-pass will be read also for –url, –us‐
2048 er, –pass.
2049
2050 Arguments should be passed in as parameter=value.
2051
2052 The result will be returned as a JSON object by default.
2053
2054 The –json parameter can be used to pass in a JSON blob as an input in‐
2055 stead of key=value arguments. This is the only way of passing in more
2056 complicated values.
2057
2058 Use “rclone rc” to see a list of all possible commands.
2059
2060 rclone rc commands parameter [flags]
2061
2062 Options
2063 -h, --help help for rc
2064 --json string Input JSON - use instead of key=value args.
2065 --no-output If set don't output the JSON result.
2066 --pass string Password to use to connect to rclone remote control.
2067 --url string URL to connect to rclone remote control. (default "http://localhost:5572/")
2068 --user string Username to use to rclone remote control.
2069
2070 rclone rcat
2071 Copies standard input to file on remote.
2072
2073 Synopsis
2074 rclone rcat reads from standard input (stdin) and copies it to a single
2075 remote file.
2076
2077 echo "hello world" | rclone rcat remote:path/to/file
2078 ffmpeg - | rclone rcat remote:path/to/file
2079
2080 If the remote file already exists, it will be overwritten.
2081
2082 rcat will try to upload small files in a single request, which is usu‐
2083 ally more efficient than the streaming/chunked upload endpoints, which
2084 use multiple requests. Exact behaviour depends on the remote. What is
2085 considered a small file may be set through --streaming-upload-cutoff.
2086 Uploading only starts after the cutoff is reached or if the file ends
2087 before that. The data must fit into RAM. The cutoff needs to be small
2088 enough to adhere the limits of your remote, please see there. General‐
2089 ly speaking, setting this cutoff too high will decrease your perfor‐
2090 mance.
2091
2092 Note that the upload can also not be retried because the data is not
2093 kept around until the upload succeeds. If you need to transfer a lot
2094 of data, you're better off caching locally and then rclone move it to
2095 the destination.
2096
2097 rclone rcat remote:path [flags]
2098
2099 Options
2100 -h, --help help for rcat
2101
2102 rclone rcd
2103 Run rclone listening to remote control commands only.
2104
2105 Synopsis
2106 This runs rclone so that it only listens to remote control commands.
2107
2108 This is useful if you are controlling rclone via the rc API.
2109
2110 If you pass in a path to a directory, rclone will serve that directory
2111 for GET requests on the URL passed in. It will also open the URL in
2112 the browser when rclone is run.
2113
2114 See the rc documentation (https://rclone.org/rc/) for more info on the
2115 rc flags.
2116
2117 rclone rcd <path to files to serve>* [flags]
2118
2119 Options
2120 -h, --help help for rcd
2121
2122 rclone rmdirs
2123 Remove empty directories under the path.
2124
2125 Synopsis
2126 This removes any empty directories (or directories that only contain
2127 empty directories) under the path that it finds, including the path if
2128 it has nothing in.
2129
2130 If you supply the –leave-root flag, it will not remove the root direc‐
2131 tory.
2132
2133 This is useful for tidying up remotes that rclone has left a lot of
2134 empty directories in.
2135
2136 rclone rmdirs remote:path [flags]
2137
2138 Options
2139 -h, --help help for rmdirs
2140 --leave-root Do not remove root directory if empty
2141
2142 rclone serve
2143 Serve a remote over a protocol.
2144
2145 Synopsis
2146 rclone serve is used to serve a remote over a given protocol. This
2147 command requires the use of a subcommand to specify the protocol, eg
2148
2149 rclone serve http remote:
2150
2151 Each subcommand has its own options which you can see in their help.
2152
2153 rclone serve <protocol> [opts] <remote> [flags]
2154
2155 Options
2156 -h, --help help for serve
2157
2158 rclone serve dlna
2159 Serve remote:path over DLNA
2160
2161 Synopsis
2162 rclone serve dlna is a DLNA media server for media stored in a rclone
2163 remote. Many devices, such as the Xbox and PlayStation, can automati‐
2164 cally discover this server in the LAN and play audio/video from it.
2165 VLC is also supported. Service discovery uses UDP multicast packets
2166 (SSDP) and will thus only work on LANs.
2167
2168 Rclone will list all files present in the remote, without filtering
2169 based on media formats or file extensions. Additionally, there is no
2170 media transcoding support. This means that some players might show
2171 files that they are not able to play back correctly.
2172
2173 Server options
2174 Use –addr to specify which IP address and port the server should listen
2175 on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs.
2176
2177 Directory Cache
2178 Using the --dir-cache-time flag, you can set how long a directory
2179 should be considered up to date and not refreshed from the backend.
2180 Changes made locally in the mount may appear immediately or invalidate
2181 the cache. However, changes done on the remote will only be picked up
2182 once the cache expires.
2183
2184 Alternatively, you can send a SIGHUP signal to rclone for it to flush
2185 all directory caches, regardless of how old they are. Assuming only
2186 one rclone instance is running, you can reset the cache like this:
2187
2188 kill -SIGHUP $(pidof rclone)
2189
2190 If you configure rclone with a remote control (/rc) then you can use
2191 rclone rc to flush the whole directory cache:
2192
2193 rclone rc vfs/forget
2194
2195 Or individual files or directories:
2196
2197 rclone rc vfs/forget file=path/to/file dir=path/to/dir
2198
2199 File Buffering
2200 The --buffer-size flag determines the amount of memory, that will be
2201 used to buffer data in advance.
2202
2203 Each open file descriptor will try to keep the specified amount of data
2204 in memory at all times. The buffered data is bound to one file de‐
2205 scriptor and won't be shared between multiple open file descriptors of
2206 the same file.
2207
2208 This flag is a upper limit for the used memory per file descriptor.
2209 The buffer will only use memory for data that is downloaded but not not
2210 yet read. If the buffer is empty, only a small amount of memory will
2211 be used. The maximum memory used by rclone for buffering can be up to
2212 --buffer-size * open files.
2213
2214 File Caching
2215 These flags control the VFS file caching options. The VFS layer is
2216 used by rclone mount to make a cloud storage system work more like a
2217 normal file system.
2218
2219 You'll need to enable VFS caching if you want, for example, to read and
2220 write simultaneously to a file. See below for more details.
2221
2222 Note that the VFS cache works in addition to the cache backend and you
2223 may find that you need one or the other or both.
2224
2225 --cache-dir string Directory rclone will use for caching.
2226 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
2227 --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
2228 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
2229 --vfs-cache-max-size int Max total size of objects in the cache. (default off)
2230
2231 If run with -vv rclone will print the location of the file cache. The
2232 files are stored in the user cache file area which is OS dependent but
2233 can be controlled with --cache-dir or setting the appropriate environ‐
2234 ment variable.
2235
2236 The cache has 4 different modes selected by --vfs-cache-mode. The
2237 higher the cache mode the more compatible rclone becomes at the cost of
2238 using disk space.
2239
2240 Note that files are written back to the remote only when they are
2241 closed so if rclone is quit or dies with open files then these won't
2242 get written back to the remote. However they will still be in the on
2243 disk cache.
2244
2245 If using –vfs-cache-max-size note that the cache may exceed this size
2246 for two reasons. Firstly because it is only checked every
2247 –vfs-cache-poll-interval. Secondly because open files cannot be evict‐
2248 ed from the cache.
2249
2250 –vfs-cache-mode off
2251 In this mode the cache will read directly from the remote and write di‐
2252 rectly to the remote without caching anything on disk.
2253
2254 This will mean some operations are not possible
2255
2256 · Files can't be opened for both read AND write
2257
2258 · Files opened for write can't be seeked
2259
2260 · Existing files opened for write must have O_TRUNC set
2261
2262 · Files open for read with O_TRUNC will be opened write only
2263
2264 · Files open for write only will behave as if O_TRUNC was supplied
2265
2266 · Open modes O_APPEND, O_TRUNC are ignored
2267
2268 · If an upload fails it can't be retried
2269
2270 –vfs-cache-mode minimal
2271 This is very similar to “off” except that files opened for read AND
2272 write will be buffered to disks. This means that files opened for
2273 write will be a lot more compatible, but uses the minimal disk space.
2274
2275 These operations are not possible
2276
2277 · Files opened for write only can't be seeked
2278
2279 · Existing files opened for write must have O_TRUNC set
2280
2281 · Files opened for write only will ignore O_APPEND, O_TRUNC
2282
2283 · If an upload fails it can't be retried
2284
2285 –vfs-cache-mode writes
2286 In this mode files opened for read only are still read directly from
2287 the remote, write only and read/write files are buffered to disk first.
2288
2289 This mode should support all normal file system operations.
2290
2291 If an upload fails it will be retried up to –low-level-retries times.
2292
2293 –vfs-cache-mode full
2294 In this mode all reads and writes are buffered to and from disk. When
2295 a file is opened for read it will be downloaded in its entirety first.
2296
2297 This may be appropriate for your needs, or you may prefer to look at
2298 the cache backend which does a much more sophisticated job of caching,
2299 including caching directory hierarchies and chunks of files.
2300
2301 In this mode, unlike the others, when a file is written to the disk, it
2302 will be kept on the disk after it is written to the remote. It will be
2303 purged on a schedule according to --vfs-cache-max-age.
2304
2305 This mode should support all normal file system operations.
2306
2307 If an upload or download fails it will be retried up to –low-level-re‐
2308 tries times.
2309
2310 rclone serve dlna remote:path [flags]
2311
2312 Options
2313 --addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
2314 --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
2315 --dir-perms FileMode Directory permissions (default 0777)
2316 --file-perms FileMode File permissions (default 0666)
2317 --gid uint32 Override the gid field set by the filesystem. (default 502)
2318 -h, --help help for dlna
2319 --no-checksum Don't compare checksums on up/download.
2320 --no-modtime Don't read/write the modification time (can speed things up).
2321 --no-seek Don't allow seeking in files.
2322 --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
2323 --read-only Mount read-only.
2324 --uid uint32 Override the uid field set by the filesystem. (default 502)
2325 --umask int Override the permission bits set by the filesystem. (default 2)
2326 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
2327 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
2328 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
2329 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
2330 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
2331 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
2332
2333 rclone serve ftp
2334 Serve remote:path over FTP.
2335
2336 Synopsis
2337 rclone serve ftp implements a basic ftp server to serve the remote over
2338 FTP protocol. This can be viewed with a ftp client or you can make a
2339 remote of type ftp to read and write it.
2340
2341 Server options
2342 Use –addr to specify which IP address and port the server should listen
2343 on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By de‐
2344 fault it only listens on localhost. You can use port :0 to let the OS
2345 choose an available port.
2346
2347 If you set –addr to listen on a public or LAN accessible IP address
2348 then using Authentication is advised - see the next section for info.
2349
2350 Authentication
2351 By default this will serve files without needing a login.
2352
2353 You can set a single username and password with the –user and –pass
2354 flags.
2355
2356 Directory Cache
2357 Using the --dir-cache-time flag, you can set how long a directory
2358 should be considered up to date and not refreshed from the backend.
2359 Changes made locally in the mount may appear immediately or invalidate
2360 the cache. However, changes done on the remote will only be picked up
2361 once the cache expires.
2362
2363 Alternatively, you can send a SIGHUP signal to rclone for it to flush
2364 all directory caches, regardless of how old they are. Assuming only
2365 one rclone instance is running, you can reset the cache like this:
2366
2367 kill -SIGHUP $(pidof rclone)
2368
2369 If you configure rclone with a remote control (/rc) then you can use
2370 rclone rc to flush the whole directory cache:
2371
2372 rclone rc vfs/forget
2373
2374 Or individual files or directories:
2375
2376 rclone rc vfs/forget file=path/to/file dir=path/to/dir
2377
2378 File Buffering
2379 The --buffer-size flag determines the amount of memory, that will be
2380 used to buffer data in advance.
2381
2382 Each open file descriptor will try to keep the specified amount of data
2383 in memory at all times. The buffered data is bound to one file de‐
2384 scriptor and won't be shared between multiple open file descriptors of
2385 the same file.
2386
2387 This flag is a upper limit for the used memory per file descriptor.
2388 The buffer will only use memory for data that is downloaded but not not
2389 yet read. If the buffer is empty, only a small amount of memory will
2390 be used. The maximum memory used by rclone for buffering can be up to
2391 --buffer-size * open files.
2392
2393 File Caching
2394 These flags control the VFS file caching options. The VFS layer is
2395 used by rclone mount to make a cloud storage system work more like a
2396 normal file system.
2397
2398 You'll need to enable VFS caching if you want, for example, to read and
2399 write simultaneously to a file. See below for more details.
2400
2401 Note that the VFS cache works in addition to the cache backend and you
2402 may find that you need one or the other or both.
2403
2404 --cache-dir string Directory rclone will use for caching.
2405 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
2406 --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
2407 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
2408 --vfs-cache-max-size int Max total size of objects in the cache. (default off)
2409
2410 If run with -vv rclone will print the location of the file cache. The
2411 files are stored in the user cache file area which is OS dependent but
2412 can be controlled with --cache-dir or setting the appropriate environ‐
2413 ment variable.
2414
2415 The cache has 4 different modes selected by --vfs-cache-mode. The
2416 higher the cache mode the more compatible rclone becomes at the cost of
2417 using disk space.
2418
2419 Note that files are written back to the remote only when they are
2420 closed so if rclone is quit or dies with open files then these won't
2421 get written back to the remote. However they will still be in the on
2422 disk cache.
2423
2424 If using –vfs-cache-max-size note that the cache may exceed this size
2425 for two reasons. Firstly because it is only checked every
2426 –vfs-cache-poll-interval. Secondly because open files cannot be evict‐
2427 ed from the cache.
2428
2429 –vfs-cache-mode off
2430 In this mode the cache will read directly from the remote and write di‐
2431 rectly to the remote without caching anything on disk.
2432
2433 This will mean some operations are not possible
2434
2435 · Files can't be opened for both read AND write
2436
2437 · Files opened for write can't be seeked
2438
2439 · Existing files opened for write must have O_TRUNC set
2440
2441 · Files open for read with O_TRUNC will be opened write only
2442
2443 · Files open for write only will behave as if O_TRUNC was supplied
2444
2445 · Open modes O_APPEND, O_TRUNC are ignored
2446
2447 · If an upload fails it can't be retried
2448
2449 –vfs-cache-mode minimal
2450 This is very similar to “off” except that files opened for read AND
2451 write will be buffered to disks. This means that files opened for
2452 write will be a lot more compatible, but uses the minimal disk space.
2453
2454 These operations are not possible
2455
2456 · Files opened for write only can't be seeked
2457
2458 · Existing files opened for write must have O_TRUNC set
2459
2460 · Files opened for write only will ignore O_APPEND, O_TRUNC
2461
2462 · If an upload fails it can't be retried
2463
2464 –vfs-cache-mode writes
2465 In this mode files opened for read only are still read directly from
2466 the remote, write only and read/write files are buffered to disk first.
2467
2468 This mode should support all normal file system operations.
2469
2470 If an upload fails it will be retried up to –low-level-retries times.
2471
2472 –vfs-cache-mode full
2473 In this mode all reads and writes are buffered to and from disk. When
2474 a file is opened for read it will be downloaded in its entirety first.
2475
2476 This may be appropriate for your needs, or you may prefer to look at
2477 the cache backend which does a much more sophisticated job of caching,
2478 including caching directory hierarchies and chunks of files.
2479
2480 In this mode, unlike the others, when a file is written to the disk, it
2481 will be kept on the disk after it is written to the remote. It will be
2482 purged on a schedule according to --vfs-cache-max-age.
2483
2484 This mode should support all normal file system operations.
2485
2486 If an upload or download fails it will be retried up to –low-level-re‐
2487 tries times.
2488
2489 rclone serve ftp remote:path [flags]
2490
2491 Options
2492 --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
2493 --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
2494 --dir-perms FileMode Directory permissions (default 0777)
2495 --file-perms FileMode File permissions (default 0666)
2496 --gid uint32 Override the gid field set by the filesystem. (default 502)
2497 -h, --help help for ftp
2498 --no-checksum Don't compare checksums on up/download.
2499 --no-modtime Don't read/write the modification time (can speed things up).
2500 --no-seek Don't allow seeking in files.
2501 --pass string Password for authentication. (empty value allow every password)
2502 --passive-port string Passive port range to use. (default "30000-32000")
2503 --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
2504 --read-only Mount read-only.
2505 --uid uint32 Override the uid field set by the filesystem. (default 502)
2506 --umask int Override the permission bits set by the filesystem. (default 2)
2507 --user string User name for authentication. (default "anonymous")
2508 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
2509 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
2510 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
2511 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
2512 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
2513 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
2514
2515 rclone serve http
2516 Serve the remote over HTTP.
2517
2518 Synopsis
2519 rclone serve http implements a basic web server to serve the remote
2520 over HTTP. This can be viewed in a web browser or you can make a re‐
2521 mote of type http read from it.
2522
2523 You can use the filter flags (eg –include, –exclude) to control what is
2524 served.
2525
2526 The server will log errors. Use -v to see access logs.
2527
2528 –bwlimit will be respected for file transfers. Use –stats to control
2529 the stats printing.
2530
2531 Server options
2532 Use –addr to specify which IP address and port the server should listen
2533 on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By de‐
2534 fault it only listens on localhost. You can use port :0 to let the OS
2535 choose an available port.
2536
2537 If you set –addr to listen on a public or LAN accessible IP address
2538 then using Authentication is advised - see the next section for info.
2539
2540 –server-read-timeout and –server-write-timeout can be used to control
2541 the timeouts on the server. Note that this is the total time for a
2542 transfer.
2543
2544 –max-header-bytes controls the maximum number of bytes the server will
2545 accept in the HTTP header.
2546
2547 Authentication
2548 By default this will serve files without needing a login.
2549
2550 You can either use an htpasswd file which can take lots of users, or
2551 set a single username and password with the –user and –pass flags.
2552
2553 Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is
2554 in standard apache format and supports MD5, SHA1 and BCrypt for basic
2555 authentication. Bcrypt is recommended.
2556
2557 To create an htpasswd file:
2558
2559 touch htpasswd
2560 htpasswd -B htpasswd user
2561 htpasswd -B htpasswd anotherUser
2562
2563 The password file can be updated while rclone is running.
2564
2565 Use –realm to set the authentication realm.
2566
2567 SSL/TLS
2568 By default this will serve over http. If you want you can serve over
2569 https. You will need to supply the –cert and –key flags. If you wish
2570 to do client side certificate validation then you will need to supply
2571 –client-ca also.
2572
2573 –cert should be a either a PEM encoded certificate or a concatenation
2574 of that with the CA certificate. –key should be the PEM encoded pri‐
2575 vate key and –client-ca should be the PEM encoded client certificate
2576 authority certificate.
2577
2578 Directory Cache
2579 Using the --dir-cache-time flag, you can set how long a directory
2580 should be considered up to date and not refreshed from the backend.
2581 Changes made locally in the mount may appear immediately or invalidate
2582 the cache. However, changes done on the remote will only be picked up
2583 once the cache expires.
2584
2585 Alternatively, you can send a SIGHUP signal to rclone for it to flush
2586 all directory caches, regardless of how old they are. Assuming only
2587 one rclone instance is running, you can reset the cache like this:
2588
2589 kill -SIGHUP $(pidof rclone)
2590
2591 If you configure rclone with a remote control (/rc) then you can use
2592 rclone rc to flush the whole directory cache:
2593
2594 rclone rc vfs/forget
2595
2596 Or individual files or directories:
2597
2598 rclone rc vfs/forget file=path/to/file dir=path/to/dir
2599
2600 File Buffering
2601 The --buffer-size flag determines the amount of memory, that will be
2602 used to buffer data in advance.
2603
2604 Each open file descriptor will try to keep the specified amount of data
2605 in memory at all times. The buffered data is bound to one file de‐
2606 scriptor and won't be shared between multiple open file descriptors of
2607 the same file.
2608
2609 This flag is a upper limit for the used memory per file descriptor.
2610 The buffer will only use memory for data that is downloaded but not not
2611 yet read. If the buffer is empty, only a small amount of memory will
2612 be used. The maximum memory used by rclone for buffering can be up to
2613 --buffer-size * open files.
2614
2615 File Caching
2616 These flags control the VFS file caching options. The VFS layer is
2617 used by rclone mount to make a cloud storage system work more like a
2618 normal file system.
2619
2620 You'll need to enable VFS caching if you want, for example, to read and
2621 write simultaneously to a file. See below for more details.
2622
2623 Note that the VFS cache works in addition to the cache backend and you
2624 may find that you need one or the other or both.
2625
2626 --cache-dir string Directory rclone will use for caching.
2627 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
2628 --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
2629 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
2630 --vfs-cache-max-size int Max total size of objects in the cache. (default off)
2631
2632 If run with -vv rclone will print the location of the file cache. The
2633 files are stored in the user cache file area which is OS dependent but
2634 can be controlled with --cache-dir or setting the appropriate environ‐
2635 ment variable.
2636
2637 The cache has 4 different modes selected by --vfs-cache-mode. The
2638 higher the cache mode the more compatible rclone becomes at the cost of
2639 using disk space.
2640
2641 Note that files are written back to the remote only when they are
2642 closed so if rclone is quit or dies with open files then these won't
2643 get written back to the remote. However they will still be in the on
2644 disk cache.
2645
2646 If using –vfs-cache-max-size note that the cache may exceed this size
2647 for two reasons. Firstly because it is only checked every
2648 –vfs-cache-poll-interval. Secondly because open files cannot be evict‐
2649 ed from the cache.
2650
2651 –vfs-cache-mode off
2652 In this mode the cache will read directly from the remote and write di‐
2653 rectly to the remote without caching anything on disk.
2654
2655 This will mean some operations are not possible
2656
2657 · Files can't be opened for both read AND write
2658
2659 · Files opened for write can't be seeked
2660
2661 · Existing files opened for write must have O_TRUNC set
2662
2663 · Files open for read with O_TRUNC will be opened write only
2664
2665 · Files open for write only will behave as if O_TRUNC was supplied
2666
2667 · Open modes O_APPEND, O_TRUNC are ignored
2668
2669 · If an upload fails it can't be retried
2670
2671 –vfs-cache-mode minimal
2672 This is very similar to “off” except that files opened for read AND
2673 write will be buffered to disks. This means that files opened for
2674 write will be a lot more compatible, but uses the minimal disk space.
2675
2676 These operations are not possible
2677
2678 · Files opened for write only can't be seeked
2679
2680 · Existing files opened for write must have O_TRUNC set
2681
2682 · Files opened for write only will ignore O_APPEND, O_TRUNC
2683
2684 · If an upload fails it can't be retried
2685
2686 –vfs-cache-mode writes
2687 In this mode files opened for read only are still read directly from
2688 the remote, write only and read/write files are buffered to disk first.
2689
2690 This mode should support all normal file system operations.
2691
2692 If an upload fails it will be retried up to –low-level-retries times.
2693
2694 –vfs-cache-mode full
2695 In this mode all reads and writes are buffered to and from disk. When
2696 a file is opened for read it will be downloaded in its entirety first.
2697
2698 This may be appropriate for your needs, or you may prefer to look at
2699 the cache backend which does a much more sophisticated job of caching,
2700 including caching directory hierarchies and chunks of files.
2701
2702 In this mode, unlike the others, when a file is written to the disk, it
2703 will be kept on the disk after it is written to the remote. It will be
2704 purged on a schedule according to --vfs-cache-max-age.
2705
2706 This mode should support all normal file system operations.
2707
2708 If an upload or download fails it will be retried up to –low-level-re‐
2709 tries times.
2710
2711 rclone serve http remote:path [flags]
2712
2713 Options
2714 --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
2715 --cert string SSL PEM key (concatenation of certificate and CA certificate)
2716 --client-ca string Client certificate authority to verify clients with
2717 --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
2718 --dir-perms FileMode Directory permissions (default 0777)
2719 --file-perms FileMode File permissions (default 0666)
2720 --gid uint32 Override the gid field set by the filesystem. (default 502)
2721 -h, --help help for http
2722 --htpasswd string htpasswd file - if not provided no authentication is done
2723 --key string SSL PEM Private key
2724 --max-header-bytes int Maximum size of request header (default 4096)
2725 --no-checksum Don't compare checksums on up/download.
2726 --no-modtime Don't read/write the modification time (can speed things up).
2727 --no-seek Don't allow seeking in files.
2728 --pass string Password for authentication.
2729 --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
2730 --read-only Mount read-only.
2731 --realm string realm for authentication (default "rclone")
2732 --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
2733 --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
2734 --uid uint32 Override the uid field set by the filesystem. (default 502)
2735 --umask int Override the permission bits set by the filesystem. (default 2)
2736 --user string User name for authentication.
2737 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
2738 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
2739 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
2740 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
2741 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
2742 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
2743
2744 rclone serve restic
2745 Serve the remote for restic's REST API.
2746
2747 Synopsis
2748 rclone serve restic implements restic's REST backend API over HTTP.
2749 This allows restic to use rclone as a data storage mechanism for cloud
2750 providers that restic does not support directly.
2751
2752 Restic (https://restic.net/) is a command line program for doing back‐
2753 ups.
2754
2755 The server will log errors. Use -v to see access logs.
2756
2757 –bwlimit will be respected for file transfers. Use –stats to control
2758 the stats printing.
2759
2760 Setting up rclone for use by restic
2761 First set up a remote for your chosen cloud provider (/docs/#config‐
2762 ure).
2763
2764 Once you have set up the remote, check it is working with, for example
2765 “rclone lsd remote:”. You may have called the remote something other
2766 than “remote:” - just substitute whatever you called it in the follow‐
2767 ing instructions.
2768
2769 Now start the rclone restic server
2770
2771 rclone serve restic -v remote:backup
2772
2773 Where you can replace “backup” in the above by whatever path in the re‐
2774 mote you wish to use.
2775
2776 By default this will serve on “localhost:8080” you can change this with
2777 use of the “–addr” flag.
2778
2779 You might wish to start this server on boot.
2780
2781 Setting up restic to use rclone
2782 Now you can follow the restic instructions (http://restic.readthedo‐
2783 cs.io/en/latest/030_preparing_a_new_repo.html#rest-server) on setting
2784 up restic.
2785
2786 Note that you will need restic 0.8.2 or later to interoperate with
2787 rclone.
2788
2789 For the example above you will want to use “http://localhost:8080/” as
2790 the URL for the REST server.
2791
2792 For example:
2793
2794 $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
2795 $ export RESTIC_PASSWORD=yourpassword
2796 $ restic init
2797 created restic backend 8b1a4b56ae at rest:http://localhost:8080/
2798
2799 Please note that knowledge of your password is required to access
2800 the repository. Losing your password means that your data is
2801 irrecoverably lost.
2802 $ restic backup /path/to/files/to/backup
2803 scan [/path/to/files/to/backup]
2804 scanned 189 directories, 312 files in 0:00
2805 [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
2806 duration: 0:00
2807 snapshot 45c8fdd8 saved
2808
2809 Multiple repositories
2810 Note that you can use the endpoint to host multiple repositories. Do
2811 this by adding a directory name or path after the URL. Note that these
2812 must end with /. Eg
2813
2814 $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
2815 # backup user1 stuff
2816 $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
2817 # backup user2 stuff
2818
2819 Server options
2820 Use –addr to specify which IP address and port the server should listen
2821 on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By de‐
2822 fault it only listens on localhost. You can use port :0 to let the OS
2823 choose an available port.
2824
2825 If you set –addr to listen on a public or LAN accessible IP address
2826 then using Authentication is advised - see the next section for info.
2827
2828 –server-read-timeout and –server-write-timeout can be used to control
2829 the timeouts on the server. Note that this is the total time for a
2830 transfer.
2831
2832 –max-header-bytes controls the maximum number of bytes the server will
2833 accept in the HTTP header.
2834
2835 Authentication
2836 By default this will serve files without needing a login.
2837
2838 You can either use an htpasswd file which can take lots of users, or
2839 set a single username and password with the –user and –pass flags.
2840
2841 Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is
2842 in standard apache format and supports MD5, SHA1 and BCrypt for basic
2843 authentication. Bcrypt is recommended.
2844
2845 To create an htpasswd file:
2846
2847 touch htpasswd
2848 htpasswd -B htpasswd user
2849 htpasswd -B htpasswd anotherUser
2850
2851 The password file can be updated while rclone is running.
2852
2853 Use –realm to set the authentication realm.
2854
2855 SSL/TLS
2856 By default this will serve over http. If you want you can serve over
2857 https. You will need to supply the –cert and –key flags. If you wish
2858 to do client side certificate validation then you will need to supply
2859 –client-ca also.
2860
2861 –cert should be a either a PEM encoded certificate or a concatenation
2862 of that with the CA certificate. –key should be the PEM encoded pri‐
2863 vate key and –client-ca should be the PEM encoded client certificate
2864 authority certificate.
2865
2866 rclone serve restic remote:path [flags]
2867
2868 Options
2869 --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
2870 --append-only disallow deletion of repository data
2871 --cert string SSL PEM key (concatenation of certificate and CA certificate)
2872 --client-ca string Client certificate authority to verify clients with
2873 -h, --help help for restic
2874 --htpasswd string htpasswd file - if not provided no authentication is done
2875 --key string SSL PEM Private key
2876 --max-header-bytes int Maximum size of request header (default 4096)
2877 --pass string Password for authentication.
2878 --realm string realm for authentication (default "rclone")
2879 --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
2880 --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
2881 --stdio run an HTTP2 server on stdin/stdout
2882 --user string User name for authentication.
2883
2884 rclone serve webdav
2885 Serve remote:path over webdav.
2886
2887 Synopsis
2888 rclone serve webdav implements a basic webdav server to serve the re‐
2889 mote over HTTP via the webdav protocol. This can be viewed with a web‐
2890 dav client or you can make a remote of type webdav to read and write
2891 it.
2892
2893 Webdav options
2894 –etag-hash
2895 This controls the ETag header. Without this flag the ETag will be
2896 based on the ModTime and Size of the object.
2897
2898 If this flag is set to “auto” then rclone will choose the first sup‐
2899 ported hash on the backend or you can use a named hash such as “MD5” or
2900 “SHA-1”.
2901
2902 Use “rclone hashsum” to see the full list.
2903
2904 Server options
2905 Use –addr to specify which IP address and port the server should listen
2906 on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By de‐
2907 fault it only listens on localhost. You can use port :0 to let the OS
2908 choose an available port.
2909
2910 If you set –addr to listen on a public or LAN accessible IP address
2911 then using Authentication is advised - see the next section for info.
2912
2913 –server-read-timeout and –server-write-timeout can be used to control
2914 the timeouts on the server. Note that this is the total time for a
2915 transfer.
2916
2917 –max-header-bytes controls the maximum number of bytes the server will
2918 accept in the HTTP header.
2919
2920 Authentication
2921 By default this will serve files without needing a login.
2922
2923 You can either use an htpasswd file which can take lots of users, or
2924 set a single username and password with the –user and –pass flags.
2925
2926 Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is
2927 in standard apache format and supports MD5, SHA1 and BCrypt for basic
2928 authentication. Bcrypt is recommended.
2929
2930 To create an htpasswd file:
2931
2932 touch htpasswd
2933 htpasswd -B htpasswd user
2934 htpasswd -B htpasswd anotherUser
2935
2936 The password file can be updated while rclone is running.
2937
2938 Use –realm to set the authentication realm.
2939
2940 SSL/TLS
2941 By default this will serve over http. If you want you can serve over
2942 https. You will need to supply the –cert and –key flags. If you wish
2943 to do client side certificate validation then you will need to supply
2944 –client-ca also.
2945
2946 –cert should be a either a PEM encoded certificate or a concatenation
2947 of that with the CA certificate. –key should be the PEM encoded pri‐
2948 vate key and –client-ca should be the PEM encoded client certificate
2949 authority certificate.
2950
2951 Directory Cache
2952 Using the --dir-cache-time flag, you can set how long a directory
2953 should be considered up to date and not refreshed from the backend.
2954 Changes made locally in the mount may appear immediately or invalidate
2955 the cache. However, changes done on the remote will only be picked up
2956 once the cache expires.
2957
2958 Alternatively, you can send a SIGHUP signal to rclone for it to flush
2959 all directory caches, regardless of how old they are. Assuming only
2960 one rclone instance is running, you can reset the cache like this:
2961
2962 kill -SIGHUP $(pidof rclone)
2963
2964 If you configure rclone with a remote control (/rc) then you can use
2965 rclone rc to flush the whole directory cache:
2966
2967 rclone rc vfs/forget
2968
2969 Or individual files or directories:
2970
2971 rclone rc vfs/forget file=path/to/file dir=path/to/dir
2972
2973 File Buffering
2974 The --buffer-size flag determines the amount of memory, that will be
2975 used to buffer data in advance.
2976
2977 Each open file descriptor will try to keep the specified amount of data
2978 in memory at all times. The buffered data is bound to one file de‐
2979 scriptor and won't be shared between multiple open file descriptors of
2980 the same file.
2981
2982 This flag is a upper limit for the used memory per file descriptor.
2983 The buffer will only use memory for data that is downloaded but not not
2984 yet read. If the buffer is empty, only a small amount of memory will
2985 be used. The maximum memory used by rclone for buffering can be up to
2986 --buffer-size * open files.
2987
2988 File Caching
2989 These flags control the VFS file caching options. The VFS layer is
2990 used by rclone mount to make a cloud storage system work more like a
2991 normal file system.
2992
2993 You'll need to enable VFS caching if you want, for example, to read and
2994 write simultaneously to a file. See below for more details.
2995
2996 Note that the VFS cache works in addition to the cache backend and you
2997 may find that you need one or the other or both.
2998
2999 --cache-dir string Directory rclone will use for caching.
3000 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
3001 --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
3002 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
3003 --vfs-cache-max-size int Max total size of objects in the cache. (default off)
3004
3005 If run with -vv rclone will print the location of the file cache. The
3006 files are stored in the user cache file area which is OS dependent but
3007 can be controlled with --cache-dir or setting the appropriate environ‐
3008 ment variable.
3009
3010 The cache has 4 different modes selected by --vfs-cache-mode. The
3011 higher the cache mode the more compatible rclone becomes at the cost of
3012 using disk space.
3013
3014 Note that files are written back to the remote only when they are
3015 closed so if rclone is quit or dies with open files then these won't
3016 get written back to the remote. However they will still be in the on
3017 disk cache.
3018
3019 If using –vfs-cache-max-size note that the cache may exceed this size
3020 for two reasons. Firstly because it is only checked every
3021 –vfs-cache-poll-interval. Secondly because open files cannot be evict‐
3022 ed from the cache.
3023
3024 –vfs-cache-mode off
3025 In this mode the cache will read directly from the remote and write di‐
3026 rectly to the remote without caching anything on disk.
3027
3028 This will mean some operations are not possible
3029
3030 · Files can't be opened for both read AND write
3031
3032 · Files opened for write can't be seeked
3033
3034 · Existing files opened for write must have O_TRUNC set
3035
3036 · Files open for read with O_TRUNC will be opened write only
3037
3038 · Files open for write only will behave as if O_TRUNC was supplied
3039
3040 · Open modes O_APPEND, O_TRUNC are ignored
3041
3042 · If an upload fails it can't be retried
3043
3044 –vfs-cache-mode minimal
3045 This is very similar to “off” except that files opened for read AND
3046 write will be buffered to disks. This means that files opened for
3047 write will be a lot more compatible, but uses the minimal disk space.
3048
3049 These operations are not possible
3050
3051 · Files opened for write only can't be seeked
3052
3053 · Existing files opened for write must have O_TRUNC set
3054
3055 · Files opened for write only will ignore O_APPEND, O_TRUNC
3056
3057 · If an upload fails it can't be retried
3058
3059 –vfs-cache-mode writes
3060 In this mode files opened for read only are still read directly from
3061 the remote, write only and read/write files are buffered to disk first.
3062
3063 This mode should support all normal file system operations.
3064
3065 If an upload fails it will be retried up to –low-level-retries times.
3066
3067 –vfs-cache-mode full
3068 In this mode all reads and writes are buffered to and from disk. When
3069 a file is opened for read it will be downloaded in its entirety first.
3070
3071 This may be appropriate for your needs, or you may prefer to look at
3072 the cache backend which does a much more sophisticated job of caching,
3073 including caching directory hierarchies and chunks of files.
3074
3075 In this mode, unlike the others, when a file is written to the disk, it
3076 will be kept on the disk after it is written to the remote. It will be
3077 purged on a schedule according to --vfs-cache-max-age.
3078
3079 This mode should support all normal file system operations.
3080
3081 If an upload or download fails it will be retried up to –low-level-re‐
3082 tries times.
3083
3084 rclone serve webdav remote:path [flags]
3085
3086 Options
3087 --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
3088 --cert string SSL PEM key (concatenation of certificate and CA certificate)
3089 --client-ca string Client certificate authority to verify clients with
3090 --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
3091 --dir-perms FileMode Directory permissions (default 0777)
3092 --etag-hash string Which hash to use for the ETag, or auto or blank for off
3093 --file-perms FileMode File permissions (default 0666)
3094 --gid uint32 Override the gid field set by the filesystem. (default 502)
3095 -h, --help help for webdav
3096 --htpasswd string htpasswd file - if not provided no authentication is done
3097 --key string SSL PEM Private key
3098 --max-header-bytes int Maximum size of request header (default 4096)
3099 --no-checksum Don't compare checksums on up/download.
3100 --no-modtime Don't read/write the modification time (can speed things up).
3101 --no-seek Don't allow seeking in files.
3102 --pass string Password for authentication.
3103 --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
3104 --read-only Mount read-only.
3105 --realm string realm for authentication (default "rclone")
3106 --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
3107 --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
3108 --uid uint32 Override the uid field set by the filesystem. (default 502)
3109 --umask int Override the permission bits set by the filesystem. (default 2)
3110 --user string User name for authentication.
3111 --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
3112 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
3113 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
3114 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
3115 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
3116 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
3117
3118 rclone settier
3119 Changes storage class/tier of objects in remote.
3120
3121 Synopsis
3122 rclone settier changes storage tier or class at remote if supported.
3123 Few cloud storage services provides different storage classes on ob‐
3124 jects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool
3125 and Archive, Google Cloud Storage, Regional Storage, Nearline, Coldline
3126 etc.
3127
3128 Note that, certain tier chages make objects not available to access im‐
3129 mediately. For example tiering to archive in azure blob storage makes
3130 objects in frozen state, user can restore by setting tier to Hot/Cool,
3131 similarly S3 to Glacier makes object inaccessible.true
3132
3133 You can use it to tier single object
3134
3135 rclone settier Cool remote:path/file
3136
3137 Or use rclone filters to set tier on only specific files
3138
3139 rclone --include "*.txt" settier Hot remote:path/dir
3140
3141 Or just provide remote directory and all files in directory will be
3142 tiered
3143
3144 rclone settier tier remote:path/dir
3145
3146 rclone settier tier remote:path [flags]
3147
3148 Options
3149 -h, --help help for settier
3150
3151 rclone touch
3152 Create new file or change file modification time.
3153
3154 Synopsis
3155 Create new file or change file modification time.
3156
3157 rclone touch remote:path [flags]
3158
3159 Options
3160 -h, --help help for touch
3161 -C, --no-create Do not create the file if it does not exist.
3162 -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
3163
3164 rclone tree
3165 List the contents of the remote in a tree like fashion.
3166
3167 Synopsis
3168 rclone tree lists the contents of a remote in a similar way to the unix
3169 tree command.
3170
3171 For example
3172
3173 $ rclone tree remote:path
3174 /
3175 ├── file1
3176 ├── file2
3177 ├── file3
3178 └── subdir
3179 ├── file4
3180 └── file5
3181
3182 1 directories, 5 files
3183
3184 You can use any of the filtering options with the tree command (eg –in‐
3185 clude and –exclude). You can also use –fast-list.
3186
3187 The tree command has many options for controlling the listing which are
3188 compatible with the tree command. Note that not all of them have short
3189 options as they conflict with rclone's short options.
3190
3191 rclone tree remote:path [flags]
3192
3193 Options
3194 -a, --all All files are listed (list . files too).
3195 -C, --color Turn colorization on always.
3196 -d, --dirs-only List directories only.
3197 --dirsfirst List directories before files (-U disables).
3198 --full-path Print the full path prefix for each file.
3199 -h, --help help for tree
3200 --human Print the size in a more human readable way.
3201 --level int Descend only level directories deep.
3202 -D, --modtime Print the date of last modification.
3203 -i, --noindent Don't print indentation lines.
3204 --noreport Turn off file/directory count at end of tree listing.
3205 -o, --output string Output to file instead of stdout.
3206 -p, --protections Print the protections for each file.
3207 -Q, --quote Quote filenames with double quotes.
3208 -s, --size Print the size in bytes of each file.
3209 --sort string Select sort: name,version,size,mtime,ctime.
3210 --sort-ctime Sort files by last status change time.
3211 -t, --sort-modtime Sort files by last modification time.
3212 -r, --sort-reverse Reverse the order of the sort.
3213 -U, --unsorted Leave files unsorted.
3214 --version Sort files alphanumerically by version.
3215
3216 Copying single files
3217 rclone normally syncs or copies directories. However, if the source
3218 remote points to a file, rclone will just copy that file. The destina‐
3219 tion remote must point to a directory - rclone will give the error
3220 Failed to create file system for "remote:file": is a file not a direc‐
3221 tory if it isn't.
3222
3223 For example, suppose you have a remote with a file in called test.jpg,
3224 then you could copy just that file like this
3225
3226 rclone copy remote:test.jpg /tmp/download
3227
3228 The file test.jpg will be placed inside /tmp/download.
3229
3230 This is equivalent to specifying
3231
3232 rclone copy --files-from /tmp/files remote: /tmp/download
3233
3234 Where /tmp/files contains the single line
3235
3236 test.jpg
3237
3238 It is recommended to use copy when copying individual files, not sync.
3239 They have pretty much the same effect but copy will use a lot less mem‐
3240 ory.
3241
3242 Syntax of remote paths
3243 The syntax of the paths passed to the rclone command are as follows.
3244
3245 /path/to/dir
3246 This refers to the local file system.
3247
3248 On Windows only \ may be used instead of / in local paths only, non lo‐
3249 cal paths must use /.
3250
3251 These paths needn't start with a leading / - if they don't then they
3252 will be relative to the current directory.
3253
3254 remote:path/to/dir
3255 This refers to a directory path/to/dir on remote: as defined in the
3256 config file (configured with rclone config).
3257
3258 remote:/path/to/dir
3259 On most backends this is refers to the same directory as re‐
3260 mote:path/to/dir and that format should be preferred. On a very small
3261 number of remotes (FTP, SFTP, Dropbox for business) this will refer to
3262 a different directory. On these, paths without a leading / will refer
3263 to your “home” directory and paths with a leading / will refer to the
3264 root.
3265
3266 :backend:path/to/dir
3267 This is an advanced form for creating remotes on the fly. backend
3268 should be the name or prefix of a backend (the type in the config file)
3269 and all the configuration for the backend should be provided on the
3270 command line (or in environment variables).
3271
3272 Here are some examples:
3273
3274 rclone lsd --http-url https://pub.rclone.org :http:
3275
3276 To list all the directories in the root of https://pub.rclone.org/.
3277
3278 rclone lsf --http-url https://example.com :http:path/to/dir
3279
3280 To list files and directories in https://example.com/path/to/dir/
3281
3282 rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir
3283
3284 To copy files and directories in https://example.com/path/to/dir to
3285 /tmp/dir.
3286
3287 rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir
3288
3289 To copy files and directories from example.com in the relative directo‐
3290 ry path/to/dir to /tmp/dir using sftp.
3291
3292 Quoting and the shell
3293 When you are typing commands to your computer you are using something
3294 called the command line shell. This interprets various characters in
3295 an OS specific way.
3296
3297 Here are some gotchas which may help users unfamiliar with the shell
3298 rules
3299
3300 Linux / OSX
3301 If your names have spaces or shell metacharacters (eg *, ?, $, ', "
3302 etc) then you must quote them. Use single quotes ' by default.
3303
3304 rclone copy 'Important files?' remote:backup
3305
3306 If you want to send a ' you will need to use ", eg
3307
3308 rclone copy "O'Reilly Reviews" remote:backup
3309
3310 The rules for quoting metacharacters are complicated and if you want
3311 the full details you'll have to consult the manual page for your shell.
3312
3313 Windows
3314 If your names have spaces in you need to put them in ", eg
3315
3316 rclone copy "E:\folder name\folder name\folder name" remote:backup
3317
3318 If you are using the root directory on its own then don't quote it (see
3319 #464 (https://github.com/ncw/rclone/issues/464) for why), eg
3320
3321 rclone copy E:\ remote:backup
3322
3323 Copying files or directories with : in the names
3324 rclone uses : to mark a remote name. This is, however, a valid file‐
3325 name component in non-Windows OSes. The remote name parser will only
3326 search for a : up to the first / so if you need to act on a file or di‐
3327 rectory like this then use the full path starting with a /, or use ./
3328 as a current directory prefix.
3329
3330 So to sync a directory called sync:me to a remote called remote: use
3331
3332 rclone sync ./sync:me remote:path
3333
3334 or
3335
3336 rclone sync /full/path/to/sync:me remote:path
3337
3338 Server Side Copy
3339 Most remotes (but not all - see the overview (/overview/#optional-fea‐
3340 tures)) support server side copy.
3341
3342 This means if you want to copy one folder to another then rclone won't
3343 download all the files and re-upload them; it will instruct the server
3344 to copy them in place.
3345
3346 Eg
3347
3348 rclone copy s3:oldbucket s3:newbucket
3349
3350 Will copy the contents of oldbucket to newbucket without downloading
3351 and re-uploading.
3352
3353 Remotes which don't support server side copy will download and re-up‐
3354 load in this case.
3355
3356 Server side copies are used with sync and copy and will be identified
3357 in the log when using the -v flag. The move command may also use them
3358 if remote doesn't support server side move directly. This is done by
3359 issuing a server side copy then a delete which is much quicker than a
3360 download and re-upload.
3361
3362 Server side copies will only be attempted if the remote names are the
3363 same.
3364
3365 This can be used when scripting to make aged backups efficiently, eg
3366
3367 rclone sync remote:current-backup remote:previous-backup
3368 rclone sync /path/to/files remote:current-backup
3369
3370 Options
3371 Rclone has a number of options to control its behaviour.
3372
3373 Options that take parameters can have the values passed in two ways,
3374 --option=value or --option value. However boolean (true/false) options
3375 behave slightly differently to the other options in that --boolean sets
3376 the option to true and the absence of the flag sets it to false. It is
3377 also possible to specify --boolean=false or --boolean=true. Note that
3378 --boolean false is not valid - this is parsed as --boolean and the
3379 false is parsed as an extra command line argument for rclone.
3380
3381 Options which use TIME use the go time parser. A duration string is a
3382 possibly signed sequence of decimal numbers, each with optional frac‐
3383 tion and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid
3384 time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”.
3385
3386 Options which use SIZE use kByte by default. However, a suffix of b
3387 for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P
3388 for PBytes may be used. These are the binary units, eg 1, 2**10,
3389 2**20, 2**30 respectively.
3390
3391 –backup-dir=DIR
3392 When using sync, copy or move any files which would have been overwrit‐
3393 ten or deleted are moved in their original hierarchy into this directo‐
3394 ry.
3395
3396 If --suffix is set, then the moved files will have the suffix added to
3397 them. If there is a file with the same path (after the suffix has been
3398 added) in DIR, then it will be overwritten.
3399
3400 The remote in use must support server side move or copy and you must
3401 use the same remote as the destination of the sync. The backup direc‐
3402 tory must not overlap the destination directory.
3403
3404 For example
3405
3406 rclone sync /path/to/local remote:current --backup-dir remote:old
3407
3408 will sync /path/to/local to remote:current, but for any files which
3409 would have been updated or deleted will be stored in remote:old.
3410
3411 If running rclone from a script you might want to use today's date as
3412 the directory name passed to --backup-dir to store the old files, or
3413 you might want to pass --suffix with today's date.
3414
3415 –bind string
3416 Local address to bind to for outgoing connections. This can be an IPv4
3417 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the
3418 host name doesn't resolve or resolves to more than one IP address it
3419 will give an error.
3420
3421 –bwlimit=BANDWIDTH_SPEC
3422 This option controls the bandwidth limit. Limits can be specified in
3423 two ways: As a single limit, or as a timetable.
3424
3425 Single limits last for the duration of the session. To use a single
3426 limit, specify the desired bandwidth in kBytes/s, or use a suffix
3427 b|k|M|G. The default is 0 which means to not limit bandwidth.
3428
3429 For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
3430
3431 It is also possible to specify a “timetable” of limits, which will
3432 cause certain limits to be applied at certain times. To specify a
3433 timetable, format your entries as “WEEKDAY-HH:MM,BANDWIDTH WEEK‐
3434 DAY-HH:MM,BANDWIDTH...” where: WEEKDAY is optional element. It could
3435 be written as whole world or only using 3 first characters. HH:MM is
3436 an hour from 00:00 to 23:59.
3437
3438 An example of a typical timetable to avoid link saturation during day‐
3439 time working hours could be:
3440
3441 --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
3442
3443 In this example, the transfer bandwidth will be every day set to
3444 512kBytes/sec at 8am. At noon, it will raise to 10Mbytes/s, and drop
3445 back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set
3446 to 30MBytes/s, and at 11pm it will be completely disabled (full speed).
3447 Anything between 11pm and 8am will remain unlimited.
3448
3449 An example of timetable with WEEKDAY could be:
3450
3451 --bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"
3452
3453 It mean that, the transfer bandwidth will be set to 512kBytes/sec on
3454 Monday. It will raise to 10Mbytes/s before the end of Friday. At
3455 10:00 on Sunday it will be set to 1Mbyte/s. From 20:00 at Sunday will
3456 be unlimited.
3457
3458 Timeslots without weekday are extended to whole week. So this one ex‐
3459 ample:
3460
3461 --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
3462
3463 Is equal to this:
3464
3465 --bwlim‐
3466 it "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
3467
3468 Bandwidth limits only apply to the data transfer. They don't apply to
3469 the bandwidth of the directory listings etc.
3470
3471 Note that the units are Bytes/s, not Bits/s. Typically connections are
3472 measured in Bits/s - to convert divide by 8. For example, let's say
3473 you have a 10 Mbit/s connection and you wish rclone to use half of it -
3474 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlim‐
3475 it 0.625M parameter for rclone.
3476
3477 On Unix systems (Linux, MacOS, ...) the bandwidth limiter can be tog‐
3478 gled by sending a SIGUSR2 signal to rclone. This allows to remove the
3479 limitations of a long running rclone transfer and to restore it back to
3480 the value specified with --bwlimit quickly when needed. Assuming there
3481 is only one rclone instance running, you can toggle the limiter like
3482 this:
3483
3484 kill -SIGUSR2 $(pidof rclone)
3485
3486 If you configure rclone with a remote control (/rc) then you can use
3487 change the bwlimit dynamically:
3488
3489 rclone rc core/bwlimit rate=1M
3490
3491 –buffer-size=SIZE
3492 Use this sized buffer to speed up file transfers. Each --transfer will
3493 use this much memory for buffering.
3494
3495 When using mount or cmount each open file descriptor will use this much
3496 memory for buffering. See the mount (/commands/rclone_mount/#file-
3497 buffering) documentation for more details.
3498
3499 Set to 0 to disable the buffering for the minimum memory usage.
3500
3501 Note that the memory allocation of the buffers is influenced by the
3502 –use-mmap flag.
3503
3504 –checkers=N
3505 The number of checkers to run in parallel. Checkers do the equality
3506 checking of files during a sync. For some storage systems (eg S3,
3507 Swift, Dropbox) this can take a significant amount of time so they are
3508 run in parallel.
3509
3510 The default is to run 8 checkers in parallel.
3511
3512 -c, –checksum
3513 Normally rclone will look at modification time and size of files to see
3514 if they are equal. If you set this flag then rclone will check the
3515 file hash and size to determine if files are equal.
3516
3517 This is useful when the remote doesn't support setting modified time
3518 and a more accurate sync is desired than just checking the file size.
3519
3520 This is very useful when transferring between remotes which store the
3521 same hash type on the object, eg Drive and Swift. For details of which
3522 remotes support which hash type see the table in the overview section
3523 (https://rclone.org/overview/).
3524
3525 Eg rclone --checksum sync s3:/bucket swift:/bucket would run much
3526 quicker than without the --checksum flag.
3527
3528 When using this flag, rclone won't update mtimes of remote files if
3529 they are incorrect as it would normally.
3530
3531 –config=CONFIG_FILE
3532 Specify the location of the rclone config file.
3533
3534 Normally the config file is in your home directory as a file called
3535 .config/rclone/rclone.conf (or .rclone.conf if created with an older
3536 version). If $XDG_CONFIG_HOME is set it will be at $XDG_CON‐
3537 FIG_HOME/rclone/rclone.conf
3538
3539 If you run rclone config file you will see where the default location
3540 is for you.
3541
3542 Use this flag to override the config location, eg rclone --config=".my‐
3543 config" .config.
3544
3545 –contimeout=TIME
3546 Set the connection timeout. This should be in go time format which
3547 looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.
3548
3549 The connection timeout is the amount of time rclone will wait for a
3550 connection to go through to a remote object storage system. It is 1m
3551 by default.
3552
3553 –dedupe-mode MODE
3554 Mode to run dedupe command in. One of interactive, skip, first, new‐
3555 est, oldest, rename. The default is interactive. See the dedupe com‐
3556 mand for more information as to what these options mean.
3557
3558 –disable FEATURE,FEATURE,...
3559 This disables a comma separated list of optional features. For example
3560 to disable server side move and server side copy use:
3561
3562 --disable move,copy
3563
3564 The features can be put in in any case.
3565
3566 To see a list of which features can be disabled use:
3567
3568 --disable help
3569
3570 See the overview features (/overview/#features) and optional features
3571 (/overview/#optional-features) to get an idea of which feature does
3572 what.
3573
3574 This flag can be useful for debugging and in exceptional circumstances
3575 (eg Google Drive limiting the total volume of Server Side Copies to
3576 100GB/day).
3577
3578 -n, –dry-run
3579 Do a trial run with no permanent changes. Use this to see what rclone
3580 would do without actually doing it. Useful when setting up the sync
3581 command which deletes files in the destination.
3582
3583 –ignore-checksum
3584 Normally rclone will check that the checksums of transferred files
3585 match, and give an error “corrupted on transfer” if they don't.
3586
3587 You can use this option to skip that check. You should only use it if
3588 you have had the “corrupted on transfer” error message and you are sure
3589 you might want to transfer potentially corrupted data.
3590
3591 –ignore-existing
3592 Using this option will make rclone unconditionally skip all files that
3593 exist on the destination, no matter the content of these files.
3594
3595 While this isn't a generally recommended option, it can be useful in
3596 cases where your files change due to encryption. However, it cannot
3597 correct partial transfers in case a transfer was interrupted.
3598
3599 –ignore-size
3600 Normally rclone will look at modification time and size of files to see
3601 if they are equal. If you set this flag then rclone will check only
3602 the modification time. If --checksum is set then it only checks the
3603 checksum.
3604
3605 It will also cause rclone to skip verifying the sizes are the same af‐
3606 ter transfer.
3607
3608 This can be useful for transferring files to and from OneDrive which
3609 occasionally misreports the size of image files (see #399
3610 (https://github.com/ncw/rclone/issues/399) for more info).
3611
3612 -I, –ignore-times
3613 Using this option will cause rclone to unconditionally upload all files
3614 regardless of the state of files on the destination.
3615
3616 Normally rclone would skip any files that have the same modification
3617 time and are the same size (or have the same checksum if using --check‐
3618 sum).
3619
3620 –immutable
3621 Treat source and destination files as immutable and disallow modifica‐
3622 tion.
3623
3624 With this option set, files will be created and deleted as requested,
3625 but existing files will never be updated. If an existing file does not
3626 match between the source and destination, rclone will give the error
3627 Source and destination exist but do not match: immutable file modified.
3628
3629 Note that only commands which transfer files (e.g. sync, copy, move)
3630 are affected by this behavior, and only modification is disallowed.
3631 Files may still be deleted explicitly (e.g. delete, purge) or implic‐
3632 itly (e.g. sync, move). Use copy --immutable if it is desired to
3633 avoid deletion as well as modification.
3634
3635 This can be useful as an additional layer of protection for immutable
3636 or append-only data sets (notably backup archives), where modification
3637 implies corruption and should not be propagated.
3638
3639 –leave-root
3640 During rmdirs it will not remove root directory, even if it's empty.
3641
3642 –log-file=FILE
3643 Log all of rclone's output to FILE. This is not active by default.
3644 This can be useful for tracking down problems with syncs in combination
3645 with the -v flag. See the Logging section for more info.
3646
3647 Note that if you are using the logrotate program to manage rclone's
3648 logs, then you should use the copytruncate option as rclone doesn't
3649 have a signal to rotate logs.
3650
3651 –log-format LIST
3652 Comma separated list of log format options. date, time, microseconds,
3653 longfile, shortfile, UTC. The default is “date,time”.
3654
3655 –log-level LEVEL
3656 This sets the log level for rclone. The default log level is NOTICE.
3657
3658 DEBUG is equivalent to -vv. It outputs lots of debug info - useful for
3659 bug reports and really finding out what rclone is doing.
3660
3661 INFO is equivalent to -v. It outputs information about each transfer
3662 and prints stats once a minute by default.
3663
3664 NOTICE is the default log level if no logging flags are supplied. It
3665 outputs very little when things are working normally. It outputs warn‐
3666 ings and significant events.
3667
3668 ERROR is equivalent to -q. It only outputs error messages.
3669
3670 –low-level-retries NUMBER
3671 This controls the number of low level retries rclone does.
3672
3673 A low level retry is used to retry a failing operation - typically one
3674 HTTP request. This might be uploading a chunk of a big file for exam‐
3675 ple. You will see low level retries in the log with the -v flag.
3676
3677 This shouldn't need to be changed from the default in normal opera‐
3678 tions. However, if you get a lot of low level retries you may wish to
3679 reduce the value so rclone moves on to a high level retry (see the
3680 --retries flag) quicker.
3681
3682 Disable low level retries with --low-level-retries 1.
3683
3684 –max-backlog=N
3685 This is the maximum allowable backlog of files in a sync/copy/move
3686 queued for being checked or transferred.
3687
3688 This can be set arbitrarily large. It will only use memory when the
3689 queue is in use. Note that it will use in the order of N kB of memory
3690 when the backlog is in use.
3691
3692 Setting this large allows rclone to calculate how many files are pend‐
3693 ing more accurately and give a more accurate estimated finish time.
3694
3695 Setting this small will make rclone more synchronous to the listings of
3696 the remote which may be desirable.
3697
3698 –max-delete=N
3699 This tells rclone not to delete more than N files. If that limit is
3700 exceeded then a fatal error will be generated and rclone will stop the
3701 operation in progress.
3702
3703 –max-depth=N
3704 This modifies the recursion depth for all the commands except purge.
3705
3706 So if you do rclone --max-depth 1 ls remote:path you will see only the
3707 files in the top level directory. Using --max-depth 2 means you will
3708 see all the files in first two directory levels and so on.
3709
3710 For historical reasons the lsd command defaults to using a --max-depth
3711 of 1 - you can override this with the command line flag.
3712
3713 You can use this command to disable recursion (with --max-depth 1).
3714
3715 Note that if you use this with sync and --delete-excluded the files not
3716 recursed through are considered excluded and will be deleted on the
3717 destination. Test first with --dry-run if you are not sure what will
3718 happen.
3719
3720 –max-transfer=SIZE
3721 Rclone will stop transferring when it has reached the size specified.
3722 Defaults to off.
3723
3724 When the limit is reached all transfers will stop immediately.
3725
3726 Rclone will exit with exit code 8 if the transfer limit is reached.
3727
3728 –modify-window=TIME
3729 When checking whether a file has been modified, this is the maximum al‐
3730 lowed time difference that a file can have and still be considered
3731 equivalent.
3732
3733 The default is 1ns unless this is overridden by a remote. For example
3734 OS X only stores modification times to the nearest second so if you are
3735 reading and writing to an OS X filing system this will be 1s by de‐
3736 fault.
3737
3738 This command line flag allows you to override that computed default.
3739
3740 –no-gzip-encoding
3741 Don't set Accept-Encoding: gzip. This means that rclone won't ask the
3742 server for compressed files automatically. Useful if you've set the
3743 server to return files with Content-Encoding: gzip but you uploaded
3744 compressed files.
3745
3746 There is no need to set this in normal operation, and doing so will de‐
3747 crease the network transfer efficiency of rclone.
3748
3749 –no-traverse
3750 The --no-traverse flag controls whether the destination file system is
3751 traversed when using the copy or move commands. --no-traverse is not
3752 compatible with sync and will be ignored if you supply it with sync.
3753
3754 If you are only copying a small number of files (or are filtering most
3755 of the files) and/or have a large number of files on the destination
3756 then --no-traverse will stop rclone listing the destination and save
3757 time.
3758
3759 However, if you are copying a large number of files, especially if you
3760 are doing a copy where lots of the files under consideration haven't
3761 changed and won't need copying then you shouldn't use --no-traverse.
3762
3763 See rclone copy (https://rclone.org/commands/rclone_copy/) for an exam‐
3764 ple of how to use it.
3765
3766 –no-update-modtime
3767 When using this flag, rclone won't update modification times of remote
3768 files if they are incorrect as it would normally.
3769
3770 This can be used if the remote is being synced with another tool also
3771 (eg the Google Drive client).
3772
3773 -P, –progress
3774 This flag makes rclone update the stats in a static block in the termi‐
3775 nal providing a realtime overview of the transfer.
3776
3777 Any log messages will scroll above the static block. Log messages will
3778 push the static block down to the bottom of the terminal where it will
3779 stay.
3780
3781 Normally this is updated every 500mS but this period can be overridden
3782 with the --stats flag.
3783
3784 This can be used with the --stats-one-line flag for a simpler display.
3785
3786 Note: On Windows untilthis bug (https://github.com/Azure/go-an‐
3787 siterm/issues/26) is fixed all non-ASCII characters will be replaced
3788 with . when --progress is in use.
3789
3790 -q, –quiet
3791 Normally rclone outputs stats and a completion message. If you set
3792 this flag it will make as little output as possible.
3793
3794 –retries int
3795 Retry the entire sync if it fails this many times it fails (default 3).
3796
3797 Some remotes can be unreliable and a few retries help pick up the files
3798 which didn't get transferred because of errors.
3799
3800 Disable retries with --retries 1.
3801
3802 –retries-sleep=TIME
3803 This sets the interval between each retry specified by --retries
3804
3805 The default is 0. Use 0 to disable.
3806
3807 –size-only
3808 Normally rclone will look at modification time and size of files to see
3809 if they are equal. If you set this flag then rclone will check only
3810 the size.
3811
3812 This can be useful transferring files from Dropbox which have been mod‐
3813 ified by the desktop sync client which doesn't set checksums of modifi‐
3814 cation times in the same way as rclone.
3815
3816 –stats=TIME
3817 Commands which transfer data (sync, copy, copyto, move, moveto) will
3818 print data transfer stats at regular intervals to show their progress.
3819
3820 This sets the interval.
3821
3822 The default is 1m. Use 0 to disable.
3823
3824 If you set the stats interval then all commands can show stats. This
3825 can be useful when running other commands, check or mount for example.
3826
3827 Stats are logged at INFO level by default which means they won't show
3828 at default log level NOTICE. Use --stats-log-level NOTICE or -v to
3829 make them show. See the Logging section for more info on log levels.
3830
3831 Note that on macOS you can send a SIGINFO (which is normally ctrl-T in
3832 the terminal) to make the stats print immediately.
3833
3834 –stats-file-name-length integer
3835 By default, the --stats output will truncate file names and paths
3836 longer than 40 characters. This is equivalent to providing
3837 --stats-file-name-length 40. Use --stats-file-name-length 0 to disable
3838 any truncation of file names printed by stats.
3839
3840 –stats-log-level string
3841 Log level to show --stats output at. This can be DEBUG, INFO, NOTICE,
3842 or ERROR. The default is INFO. This means at the default level of
3843 logging which is NOTICE the stats won't show - if you want them to then
3844 use --stats-log-level NOTICE. See the Logging section for more info on
3845 log levels.
3846
3847 –stats-one-line
3848 When this is specified, rclone condenses the stats into a single line
3849 showing the most important stats only.
3850
3851 –stats-unit=bits|bytes
3852 By default, data transfer rates will be printed in bytes/second.
3853
3854 This option allows the data rate to be printed in bits/second.
3855
3856 Data transfer volume will still be reported in bytes.
3857
3858 The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals
3859 1,048,576 bits/s and not 1,000,000 bits/s.
3860
3861 The default is bytes.
3862
3863 –suffix=SUFFIX
3864 This is for use with --backup-dir only. If this isn't set then --back‐
3865 up-dir will move files with their original name. If it is set then the
3866 files will have SUFFIX added on to them.
3867
3868 See --backup-dir for more info.
3869
3870 –suffix-keep-extension
3871 When using --suffix, setting this causes rclone put the SUFFIX before
3872 the extension of the files that it backs up rather than after.
3873
3874 So let's say we had --suffix -2019-01-01, without the flag file.txt
3875 would be backed up to file.txt-2019-01-01 and with the flag it would be
3876 backed up to file-2019-01-01.txt. This can be helpful to make sure the
3877 suffixed files can still be opened.
3878
3879 –syslog
3880 On capable OSes (not Windows or Plan9) send all log output to syslog.
3881
3882 This can be useful for running rclone in a script or rclone mount.
3883
3884 –syslog-facility string
3885 If using --syslog this sets the syslog facility (eg KERN, USER). See
3886 man syslog for a list of possible facilities. The default facility is
3887 DAEMON.
3888
3889 –tpslimit float
3890 Limit HTTP transactions per second to this. Default is 0 which is used
3891 to mean unlimited transactions per second.
3892
3893 For example to limit rclone to 10 HTTP transactions per second use
3894 --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.
3895
3896 Use this when the number of transactions per second from rclone is
3897 causing a problem with the cloud storage provider (eg getting you
3898 banned or rate limited).
3899
3900 This can be very useful for rclone mount to control the behaviour of
3901 applications using it.
3902
3903 See also --tpslimit-burst.
3904
3905 –tpslimit-burst int
3906 Max burst of transactions for --tpslimit. (default 1)
3907
3908 Normally --tpslimit will do exactly the number of transaction per sec‐
3909 ond specified. However if you supply --tps-burst then rclone can save
3910 up some transactions from when it was idle giving a burst of up to the
3911 parameter supplied.
3912
3913 For example if you provide --tpslimit-burst 10 then if rclone has been
3914 idle for more than 10*--tpslimit then it can do 10 transactions very
3915 quickly before they are limited again.
3916
3917 This may be used to increase performance of --tpslimit without changing
3918 the long term average number of transactions per second.
3919
3920 –track-renames
3921 By default, rclone doesn't keep track of renamed files, so if you re‐
3922 name a file locally then sync it to a remote, rclone will delete the
3923 old file on the remote and upload a new copy.
3924
3925 If you use this flag, and the remote supports server side copy or serv‐
3926 er side move, and the source and destination have a compatible hash,
3927 then this will track renames during sync operations and perform renam‐
3928 ing server-side.
3929
3930 Files will be matched by size and hash - if both match then a rename
3931 will be considered.
3932
3933 If the destination does not support server-side copy or move, rclone
3934 will fall back to the default behaviour and log an error level message
3935 to the console. Note: Encrypted destinations are not supported by
3936 --track-renames.
3937
3938 Note that --track-renames is incompatible with --no-traverse and that
3939 it uses extra memory to keep track of all the rename candidates.
3940
3941 Note also that --track-renames is incompatible with --delete-before and
3942 will select --delete-after instead of --delete-during.
3943
3944 –delete-(before,during,after)
3945 This option allows you to specify when files on your destination are
3946 deleted when you sync folders.
3947
3948 Specifying the value --delete-before will delete all files present on
3949 the destination, but not on the source before starting the transfer of
3950 any new or updated files. This uses two passes through the file sys‐
3951 tems, one for the deletions and one for the copies.
3952
3953 Specifying --delete-during will delete files while checking and upload‐
3954 ing files. This is the fastest option and uses the least memory.
3955
3956 Specifying --delete-after (the default value) will delay deletion of
3957 files until all new/updated files have been successfully transferred.
3958 The files to be deleted are collected in the copy pass then deleted af‐
3959 ter the copy pass has completed successfully. The files to be deleted
3960 are held in memory so this mode may use more memory. This is the
3961 safest mode as it will only delete files if there have been no errors
3962 subsequent to that. If there have been errors before the deletions
3963 start then you will get the message not delet‐
3964 ing files as there were IO errors.
3965
3966 –fast-list
3967 When doing anything which involves a directory listing (eg sync, copy,
3968 ls - in fact nearly every command), rclone normally lists a directory
3969 and processes it before using more directory lists to process any sub‐
3970 directories. This can be parallelised and works very quickly using the
3971 least amount of memory.
3972
3973 However, some remotes have a way of listing all files beneath a direc‐
3974 tory in one (or a small number) of transactions. These tend to be the
3975 bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
3976
3977 If you use the --fast-list flag then rclone will use this method for
3978 listing directories. This will have the following consequences for the
3979 listing:
3980
3981 · It will use fewer transactions (important if you pay for them)
3982
3983 · It will use more memory. Rclone has to load the whole listing into
3984 memory.
3985
3986 · It may be faster because it uses fewer transactions
3987
3988 · It may be slower because it can't be parallelized
3989
3990 rclone should always give identical results with and without
3991 --fast-list.
3992
3993 If you pay for transactions and can fit your entire sync listing into
3994 memory then --fast-list is recommended. If you have a very big sync to
3995 do then don't use --fast-list otherwise you will run out of memory.
3996
3997 If you use --fast-list on a remote which doesn't support it, then
3998 rclone will just ignore it.
3999
4000 –timeout=TIME
4001 This sets the IO idle timeout. If a transfer has started but then be‐
4002 comes idle for this long it is considered broken and disconnected.
4003
4004 The default is 5m. Set to 0 to disable.
4005
4006 –transfers=N
4007 The number of file transfers to run in parallel. It can sometimes be
4008 useful to set this to a smaller number if the remote is giving a lot of
4009 timeouts or bigger if you have lots of bandwidth and a fast remote.
4010
4011 The default is to run 4 file transfers in parallel.
4012
4013 -u, –update
4014 This forces rclone to skip any files which exist on the destination and
4015 have a modified time that is newer than the source file.
4016
4017 If an existing destination file has a modification time equal (within
4018 the computed modify window precision) to the source file's, it will be
4019 updated if the sizes are different.
4020
4021 On remotes which don't support mod time directly the time checked will
4022 be the uploaded time. This means that if uploading to one of these re‐
4023 motes, rclone will skip any files which exist on the destination and
4024 have an uploaded time that is newer than the modification time of the
4025 source file.
4026
4027 This can be useful when transferring to a remote which doesn't support
4028 mod times directly as it is more accurate than a --size-only check and
4029 faster than using --checksum.
4030
4031 –use-mmap
4032 If this flag is set then rclone will use anonymous memory allocated by
4033 mmap on Unix based platforms and VirtualAlloc on Windows for its trans‐
4034 fer buffers (size controlled by --buffer-size). Memory allocated like
4035 this does not go on the Go heap and can be returned to the OS immedi‐
4036 ately when it is finished with.
4037
4038 If this flag is not set then rclone will allocate and free the buffers
4039 using the Go memory allocator which may use more memory as memory pages
4040 are returned less aggressively to the OS.
4041
4042 It is possible this does not work well on all platforms so it is dis‐
4043 abled by default; in the future it may be enabled by default.
4044
4045 –use-server-modtime
4046 Some object-store backends (e.g, Swift, S3) do not preserve file modi‐
4047 fication times (modtime). On these backends, rclone stores the origi‐
4048 nal modtime as additional metadata on the object. By default it will
4049 make an API call to retrieve the metadata when the modtime is needed by
4050 an operation.
4051
4052 Use this flag to disable the extra API call and rely instead on the
4053 server's modified time. In cases such as a local to remote sync, know‐
4054 ing the local file is newer than the time it was last uploaded to the
4055 remote is sufficient. In those cases, this flag can speed up the
4056 process and reduce the number of API calls necessary.
4057
4058 -v, -vv, –verbose
4059 With -v rclone will tell you about each file that is transferred and a
4060 small number of significant events.
4061
4062 With -vv rclone will become very verbose telling you about every file
4063 it considers and transfers. Please send bug reports with a log with
4064 this setting.
4065
4066 -V, –version
4067 Prints the version number
4068
4069 SSL/TLS options
4070 The outoing SSL/TLS connections rclone makes can be controlled with
4071 these options. For example this can be very useful with the HTTP or
4072 WebDAV backends. Rclone HTTP servers have their own set of configura‐
4073 tion for SSL/TLS which you can find in their documentation.
4074
4075 –ca-cert string
4076 This loads the PEM encoded certificate authority certificate and uses
4077 it to verify the certificates of the servers rclone connects to.
4078
4079 If you have generated certificates signed with a local CA then you will
4080 need this flag to connect to servers using those certificates.
4081
4082 –client-cert string
4083 This loads the PEM encoded client side certificate.
4084
4085 This is used for mutual TLS authentication
4086 (https://en.wikipedia.org/wiki/Mutual_authentication).
4087
4088 The --client-key flag is required too when using this.
4089
4090 –client-key string
4091 This loads the PEM encoded client side private key used for mutual TLS
4092 authentication. Used in conjunction with --client-cert.
4093
4094 –no-check-certificate=true/false
4095 --no-check-certificate controls whether a client verifies the server's
4096 certificate chain and host name. If --no-check-certificate is true,
4097 TLS accepts any certificate presented by the server and any host name
4098 in that certificate. In this mode, TLS is susceptible to
4099 man-in-the-middle attacks.
4100
4101 This option defaults to false.
4102
4103 This should be used only for testing.
4104
4105 Configuration Encryption
4106 Your configuration file contains information for logging in to your
4107 cloud services. This means that you should keep your .rclone.conf file
4108 in a secure location.
4109
4110 If you are in an environment where that isn't possible, you can add a
4111 password to your configuration. This means that you will have to enter
4112 the password every time you start rclone.
4113
4114 To add a password to your rclone configuration, execute rclone config.
4115
4116 >rclone config
4117 Current remotes:
4118
4119 e) Edit existing remote
4120 n) New remote
4121 d) Delete remote
4122 s) Set configuration password
4123 q) Quit config
4124 e/n/d/s/q>
4125
4126 Go into s, Set configuration password:
4127
4128 e/n/d/s/q> s
4129 Your configuration is not encrypted.
4130 If you add a password, you will protect your login information to cloud services.
4131 a) Add Password
4132 q) Quit to main menu
4133 a/q> a
4134 Enter NEW configuration password:
4135 password:
4136 Confirm NEW password:
4137 password:
4138 Password set
4139 Your configuration is encrypted.
4140 c) Change Password
4141 u) Unencrypt configuration
4142 q) Quit to main menu
4143 c/u/q>
4144
4145 Your configuration is now encrypted, and every time you start rclone
4146 you will now be asked for the password. In the same menu, you can
4147 change the password or completely remove encryption from your configu‐
4148 ration.
4149
4150 There is no way to recover the configuration if you lose your password.
4151
4152 rclone uses nacl secretbox (https://godoc.org/golang.org/x/crypto/na‐
4153 cl/secretbox) which in turn uses XSalsa20 and Poly1305 to encrypt and
4154 authenticate your configuration with secret-key cryptography. The
4155 password is SHA-256 hashed, which produces the key for secretbox. The
4156 hashed password is not stored.
4157
4158 While this provides very good security, we do not recommend storing
4159 your encrypted rclone configuration in public if it contains sensitive
4160 information, maybe except if you use a very strong password.
4161
4162 If it is safe in your environment, you can set the RCLONE_CONFIG_PASS
4163 environment variable to contain your password, in which case it will be
4164 used for decrypting the configuration.
4165
4166 You can set this for a session from a script. For unix like systems
4167 save this to a file called set-rclone-password:
4168
4169 #!/bin/echo Source this file don't run it
4170
4171 read -s RCLONE_CONFIG_PASS
4172 export RCLONE_CONFIG_PASS
4173
4174 Then source the file when you want to use it. From the shell you would
4175 do source set-rclone-password. It will then ask you for the password
4176 and set it in the environment variable.
4177
4178 If you are running rclone inside a script, you might want to disable
4179 password prompts. To do that, pass the parameter --ask-password=false
4180 to rclone. This will make rclone fail instead of asking for a password
4181 if RCLONE_CONFIG_PASS doesn't contain a valid password.
4182
4183 Developer options
4184 These options are useful when developing or debugging rclone. There
4185 are also some more remote specific options which aren't documented here
4186 which are used for testing. These start with remote name eg
4187 --drive-test-option - see the docs for the remote in question.
4188
4189 –cpuprofile=FILE
4190 Write CPU profile to file. This can be analysed with go tool pprof.
4191
4192 –dump flag,flag,flag
4193 The --dump flag takes a comma separated list of flags to dump info
4194 about. These are:
4195
4196 –dump headers
4197 Dump HTTP headers with Authorization: lines removed. May still contain
4198 sensitive info. Can be very verbose. Useful for debugging only.
4199
4200 Use --dump auth if you do want the Authorization: headers.
4201
4202 –dump bodies
4203 Dump HTTP headers and bodies - may contain sensitive info. Can be very
4204 verbose. Useful for debugging only.
4205
4206 Note that the bodies are buffered in memory so don't use this for enor‐
4207 mous files.
4208
4209 –dump requests
4210 Like --dump bodies but dumps the request bodies and the response head‐
4211 ers. Useful for debugging download problems.
4212
4213 –dump responses
4214 Like --dump bodies but dumps the response bodies and the request head‐
4215 ers. Useful for debugging upload problems.
4216
4217 –dump auth
4218 Dump HTTP headers - will contain sensitive info such as Authorization:
4219 headers - use --dump headers to dump without Authorization: headers.
4220 Can be very verbose. Useful for debugging only.
4221
4222 –dump filters
4223 Dump the filters to the output. Useful to see exactly what include and
4224 exclude options are filtering on.
4225
4226 –dump goroutines
4227 This dumps a list of the running go-routines at the end of the command
4228 to standard output.
4229
4230 –dump openfiles
4231 This dumps a list of the open files at the end of the command. It uses
4232 the lsof command to do that so you'll need that installed to use it.
4233
4234 –memprofile=FILE
4235 Write memory profile to file. This can be analysed with go tool pprof.
4236
4237 Filtering
4238 For the filtering options
4239
4240 · --delete-excluded
4241
4242 · --filter
4243
4244 · --filter-from
4245
4246 · --exclude
4247
4248 · --exclude-from
4249
4250 · --include
4251
4252 · --include-from
4253
4254 · --files-from
4255
4256 · --min-size
4257
4258 · --max-size
4259
4260 · --min-age
4261
4262 · --max-age
4263
4264 · --dump filters
4265
4266 See the filtering section (https://rclone.org/filtering/).
4267
4268 Remote control
4269 For the remote control options and for instructions on how to remote
4270 control rclone
4271
4272 · --rc
4273
4274 · and anything starting with --rc-
4275
4276 See the remote control section (https://rclone.org/rc/).
4277
4278 Logging
4279 rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.
4280
4281 By default, rclone logs to standard error. This means you can redirect
4282 standard error and still see the normal output of rclone commands (eg
4283 rclone ls).
4284
4285 By default, rclone will produce Error and Notice level messages.
4286
4287 If you use the -q flag, rclone will only produce Error messages.
4288
4289 If you use the -v flag, rclone will produce Error, Notice and Info mes‐
4290 sages.
4291
4292 If you use the -vv flag, rclone will produce Error, Notice, Info and
4293 Debug messages.
4294
4295 You can also control the log levels with the --log-level flag.
4296
4297 If you use the --log-file=FILE option, rclone will redirect Error, Info
4298 and Debug messages along with standard error to FILE.
4299
4300 If you use the --syslog flag then rclone will log to syslog and the
4301 --syslog-facility control which facility it uses.
4302
4303 Rclone prefixes all log messages with their level in capitals, eg INFO
4304 which makes it easy to grep the log file for different kinds of infor‐
4305 mation.
4306
4307 Exit Code
4308 If any errors occur during the command execution, rclone will exit with
4309 a non-zero exit code. This allows scripts to detect when rclone opera‐
4310 tions have failed.
4311
4312 During the startup phase, rclone will exit immediately if an error is
4313 detected in the configuration. There will always be a log message im‐
4314 mediately before exiting.
4315
4316 When rclone is running it will accumulate errors as it goes along, and
4317 only exit with a non-zero exit code if (after retries) there were still
4318 failed transfers. For every error counted there will be a high priori‐
4319 ty log message (visible with -q) showing the message and which file
4320 caused the problem. A high priority message is also shown when start‐
4321 ing a retry so the user can see that any previous error messages may
4322 not be valid after the retry. If rclone has done a retry it will log a
4323 high priority message if the retry was successful.
4324
4325 List of exit codes
4326 · 0 - success
4327
4328 · 1 - Syntax or usage error
4329
4330 · 2 - Error not otherwise categorised
4331
4332 · 3 - Directory not found
4333
4334 · 4 - File not found
4335
4336 · 5 - Temporary error (one that more retries might fix) (Retry errors)
4337
4338 · 6 - Less serious errors (like 461 errors from dropbox) (NoRetry er‐
4339 rors)
4340
4341 · 7 - Fatal error (one that more retries won't fix, like account sus‐
4342 pended) (Fatal errors)
4343
4344 · 8 - Transfer exceeded - limit set by –max-transfer reached
4345
4346 Environment Variables
4347 Rclone can be configured entirely using environment variables. These
4348 can be used to set defaults for options or config file entries.
4349
4350 Options
4351 Every option in rclone can have its default set by environment vari‐
4352 able.
4353
4354 To find the name of the environment variable, first, take the long op‐
4355 tion name, strip the leading --, change - to _, make upper case and
4356 prepend RCLONE_.
4357
4358 For example, to always set --stats 5s, set the environment variable
4359 RCLONE_STATS=5s. If you set stats on the command line this will over‐
4360 ride the environment variable setting.
4361
4362 Or to always use the trash in drive --drive-use-trash, set
4363 RCLONE_DRIVE_USE_TRASH=true.
4364
4365 The same parser is used for the options and the environment variables
4366 so they take exactly the same form.
4367
4368 Config file
4369 You can set defaults for values in the config file on an individual re‐
4370 mote basis. If you want to use this feature, you will need to discover
4371 the name of the config items that you want. The easiest way is to run
4372 through rclone config by hand, then look in the config file to see what
4373 the values are (the config file can be found by looking at the help for
4374 --config in rclone help).
4375
4376 To find the name of the environment variable, you need to set, take
4377 RCLONE_CONFIG_ + name of remote + _ + name of config file option and
4378 make it all uppercase.
4379
4380 For example, to configure an S3 remote named mys3: without a config
4381 file (using unix ways of setting environment variables):
4382
4383 $ export RCLONE_CONFIG_MYS3_TYPE=s3
4384 $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
4385 $ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
4386 $ rclone lsd MYS3:
4387 -1 2016-09-21 12:54:21 -1 my-bucket
4388 $ rclone listremotes | grep mys3
4389 mys3:
4390
4391 Note that if you want to create a remote using environment variables
4392 you must create the ..._TYPE variable as above.
4393
4394 Other environment variables
4395 · RCLONE_CONFIG_PASS` set to contain your config file password (see
4396 Configuration Encryption section)
4397
4398 · HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions
4399 thereof).
4400
4401 · HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
4402
4403 · The environment values may be either a complete URL or a
4404 “host[:port]” for, in which case the “http” scheme is assumed.
4405
4407 Some of the configurations (those involving oauth2) require an Internet
4408 connected web browser.
4409
4410 If you are trying to set rclone up on a remote or headless box with no
4411 browser available on it (eg a NAS or a server in a datacenter) then you
4412 will need to use an alternative means of configuration. There are two
4413 ways of doing it, described below.
4414
4415 Configuring using rclone authorize
4416 On the headless box
4417
4418 ...
4419 Remote config
4420 Use auto config?
4421 * Say Y if not sure
4422 * Say N if you are working on a remote or headless machine
4423 y) Yes
4424 n) No
4425 y/n> n
4426 For this to work, you will need rclone available on a machine that has a web browser available.
4427 Execute the following on your machine:
4428 rclone authorize "amazon cloud drive"
4429 Then paste the result below:
4430 result>
4431
4432 Then on your main desktop machine
4433
4434 rclone authorize "amazon cloud drive"
4435 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
4436 Log in and authorize rclone for access
4437 Waiting for code...
4438 Got code
4439 Paste the following into your remote machine --->
4440 SECRET_TOKEN
4441 <---End paste
4442
4443 Then back to the headless box, paste in the code
4444
4445 result> SECRET_TOKEN
4446 --------------------
4447 [acd12]
4448 client_id =
4449 client_secret =
4450 token = SECRET_TOKEN
4451 --------------------
4452 y) Yes this is OK
4453 e) Edit this remote
4454 d) Delete this remote
4455 y/e/d>
4456
4457 Configuring by copying the config file
4458 Rclone stores all of its config in a single configuration file. This
4459 can easily be copied to configure a remote rclone.
4460
4461 So first configure rclone on your desktop machine
4462
4463 rclone config
4464
4465 to set up the config file.
4466
4467 Find the config file by running rclone config file, for example
4468
4469 $ rclone config file
4470 Configuration file is stored at:
4471 /home/user/.rclone.conf
4472
4473 Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and
4474 place it in the correct place (use rclone config file on the remote box
4475 to find out where).
4476
4478 Rclone has a sophisticated set of include and exclude rules. Some of
4479 these are based on patterns and some on other things like file size.
4480
4481 The filters are applied for the copy, sync, move, ls, lsl, md5sum,
4482 sha1sum, size, delete and check operations. Note that purge does not
4483 obey the filters.
4484
4485 Each path as it passes through rclone is matched against the include
4486 and exclude rules like --include, --exclude, --include-from, --ex‐
4487 clude-from, --filter, or --filter-from. The simplest way to try them
4488 out is using the ls command, or --dry-run together with -v.
4489
4490 Patterns
4491 The patterns used to match files for inclusion or exclusion are based
4492 on “file globs” as used by the unix shell.
4493
4494 If the pattern starts with a / then it only matches at the top level of
4495 the directory tree, relative to the root of the remote (not necessarily
4496 the root of the local drive). If it doesn't start with / then it is
4497 matched starting at the end of the path, but it will only match a com‐
4498 plete path element:
4499
4500 file.jpg - matches "file.jpg"
4501 - matches "directory/file.jpg"
4502 - doesn't match "afile.jpg"
4503 - doesn't match "directory/afile.jpg"
4504 /file.jpg - matches "file.jpg" in the root directory of the remote
4505 - doesn't match "afile.jpg"
4506 - doesn't match "directory/file.jpg"
4507
4508 Important Note that you must use / in patterns and not \ even if run‐
4509 ning on Windows.
4510
4511 A * matches anything but not a /.
4512
4513 *.jpg - matches "file.jpg"
4514 - matches "directory/file.jpg"
4515 - doesn't match "file.jpg/something"
4516
4517 Use ** to match anything, including slashes (/).
4518
4519 dir/** - matches "dir/file.jpg"
4520 - matches "dir/dir1/dir2/file.jpg"
4521 - doesn't match "directory/file.jpg"
4522 - doesn't match "adir/file.jpg"
4523
4524 A ? matches any character except a slash /.
4525
4526 l?ss - matches "less"
4527 - matches "lass"
4528 - doesn't match "floss"
4529
4530 A [ and ] together make a character class, such as [a-z] or [aeiou] or
4531 [[:alpha:]]. See the go regexp docs (https://golang.org/pkg/reg‐
4532 exp/syntax/) for more info on these.
4533
4534 h[ae]llo - matches "hello"
4535 - matches "hallo"
4536 - doesn't match "hullo"
4537
4538 A { and } define a choice between elements. It should contain a comma
4539 separated list of patterns, any of which might match. These patterns
4540 can contain wildcards.
4541
4542 {one,two}_potato - matches "one_potato"
4543 - matches "two_potato"
4544 - doesn't match "three_potato"
4545 - doesn't match "_potato"
4546
4547 Special characters can be escaped with a \ before them.
4548
4549 \*.jpg - matches "*.jpg"
4550 \\.jpg - matches "\.jpg"
4551 \[one\].jpg - matches "[one].jpg"
4552
4553 Patterns are case sensitive unless the --ignore-case flag is used.
4554
4555 Without --ignore-case (default)
4556
4557 potato - matches "potato"
4558 - doesn't match "POTATO"
4559
4560 With --ignore-case
4561
4562 potato - matches "potato"
4563 - matches "POTATO"
4564
4565 Note also that rclone filter globs can only be used in one of the fil‐
4566 ter command line flags, not in the specification of the remote, so
4567 rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is re‐
4568 quired is rclone --include "*.jpg" copy remote:dir /path/to/dir
4569
4570 Directories
4571 Rclone keeps track of directories that could match any file patterns.
4572
4573 Eg if you add the include rule
4574
4575 /a/*.jpg
4576
4577 Rclone will synthesize the directory include rule
4578
4579 /a/
4580
4581 If you put any rules which end in / then it will only match directo‐
4582 ries.
4583
4584 Directory matches are only used to optimise directory access patterns -
4585 you must still match the files that you want to match. Directory
4586 matches won't optimise anything on bucket based remotes (eg s3, swift,
4587 google compute storage, b2) which don't have a concept of directory.
4588
4589 Differences between rsync and rclone patterns
4590 Rclone implements bash style {a,b,c} glob matching which rsync doesn't.
4591
4592 Rclone always does a wildcard match so \ must always escape a \.
4593
4594 How the rules are used
4595 Rclone maintains a combined list of include rules and exclude rules.
4596
4597 Each file is matched in order, starting from the top, against the rule
4598 in the list until it finds a match. The file is then included or ex‐
4599 cluded according to the rule type.
4600
4601 If the matcher fails to find a match after testing against all the en‐
4602 tries in the list then the path is included.
4603
4604 For example given the following rules, + being include, - being ex‐
4605 clude,
4606
4607 - secret*.jpg
4608 + *.jpg
4609 + *.png
4610 + file2.avi
4611 - *
4612
4613 This would include
4614
4615 · file1.jpg
4616
4617 · file3.png
4618
4619 · file2.avi
4620
4621 This would exclude
4622
4623 · secret17.jpg
4624
4625 · non *.jpg and *.png
4626
4627 A similar process is done on directory entries before recursing into
4628 them. This only works on remotes which have a concept of directory (Eg
4629 local, google drive, onedrive, amazon drive) and not on bucket based
4630 remotes (eg s3, swift, google compute storage, b2).
4631
4632 Adding filtering rules
4633 Filtering rules are added with the following command line flags.
4634
4635 Repeating options
4636 You can repeat the following options to add more than one rule of that
4637 type.
4638
4639 · --include
4640
4641 · --include-from
4642
4643 · --exclude
4644
4645 · --exclude-from
4646
4647 · --filter
4648
4649 · --filter-from
4650
4651 Important You should not use --include* together with --exclude*. It
4652 may produce different results than you expected. In that case try to
4653 use: --filter*.
4654
4655 Note that all the options of the same type are processed together in
4656 the order above, regardless of what order they were placed on the com‐
4657 mand line.
4658
4659 So all --include options are processed first in the order they appeared
4660 on the command line, then all --include-from options etc.
4661
4662 To mix up the order includes and excludes, the --filter flag can be
4663 used.
4664
4665 --exclude - Exclude files matching pattern
4666 Add a single exclude rule with --exclude.
4667
4668 This flag can be repeated. See above for the order the flags are pro‐
4669 cessed in.
4670
4671 Eg --exclude *.bak to exclude all bak files from the sync.
4672
4673 --exclude-from - Read exclude patterns from file
4674 Add exclude rules from a file.
4675
4676 This flag can be repeated. See above for the order the flags are pro‐
4677 cessed in.
4678
4679 Prepare a file like this exclude-file.txt
4680
4681 # a sample exclude rule file
4682 *.bak
4683 file2.jpg
4684
4685 Then use as --exclude-from exclude-file.txt. This will sync all files
4686 except those ending in bak and file2.jpg.
4687
4688 This is useful if you have a lot of rules.
4689
4690 --include - Include files matching pattern
4691 Add a single include rule with --include.
4692
4693 This flag can be repeated. See above for the order the flags are pro‐
4694 cessed in.
4695
4696 Eg --include *.{png,jpg} to include all png and jpg files in the backup
4697 and no others.
4698
4699 This adds an implicit --exclude * at the very end of the filter list.
4700 This means you can mix --include and --include-from with the other fil‐
4701 ters (eg --exclude) but you must include all the files you want in the
4702 include statement. If this doesn't provide enough flexibility then you
4703 must use --filter-from.
4704
4705 --include-from - Read include patterns from file
4706 Add include rules from a file.
4707
4708 This flag can be repeated. See above for the order the flags are pro‐
4709 cessed in.
4710
4711 Prepare a file like this include-file.txt
4712
4713 # a sample include rule file
4714 *.jpg
4715 *.png
4716 file2.avi
4717
4718 Then use as --include-from include-file.txt. This will sync all jpg,
4719 png files and file2.avi.
4720
4721 This is useful if you have a lot of rules.
4722
4723 This adds an implicit --exclude * at the very end of the filter list.
4724 This means you can mix --include and --include-from with the other fil‐
4725 ters (eg --exclude) but you must include all the files you want in the
4726 include statement. If this doesn't provide enough flexibility then you
4727 must use --filter-from.
4728
4729 --filter - Add a file-filtering rule
4730 This can be used to add a single include or exclude rule. Include
4731 rules start with + and exclude rules start with -. A special rule
4732 called ! can be used to clear the existing rules.
4733
4734 This flag can be repeated. See above for the order the flags are pro‐
4735 cessed in.
4736
4737 Eg --filter "- *.bak" to exclude all bak files from the sync.
4738
4739 --filter-from - Read filtering patterns from a file
4740 Add include/exclude rules from a file.
4741
4742 This flag can be repeated. See above for the order the flags are pro‐
4743 cessed in.
4744
4745 Prepare a file like this filter-file.txt
4746
4747 # a sample filter rule file
4748 - secret*.jpg
4749 + *.jpg
4750 + *.png
4751 + file2.avi
4752 - /dir/Trash/**
4753 + /dir/**
4754 # exclude everything else
4755 - *
4756
4757 Then use as --filter-from filter-file.txt. The rules are processed in
4758 the order that they are defined.
4759
4760 This example will include all jpg and png files, exclude any files
4761 matching secret*.jpg and include file2.avi. It will also include ev‐
4762 erything in the directory dir at the root of the sync, except dir/Trash
4763 which it will exclude. Everything else will be excluded from the sync.
4764
4765 --files-from - Read list of source-file names
4766 This reads a list of file names from the file passed in and only these
4767 files are transferred. The filtering rules are ignored completely if
4768 you use this option.
4769
4770 Rclone will traverse the file system if you use --files-from, effec‐
4771 tively using the files in --files-from as a set of filters. Rclone
4772 will not error if any of the files are missing.
4773
4774 If you use --no-traverse as well as --files-from then rclone will not
4775 traverse the destination file system, it will find each file individu‐
4776 ally using approximately 1 API call. This can be more efficient for
4777 small lists of files.
4778
4779 This option can be repeated to read from more than one file. These are
4780 read in the order that they are placed on the command line.
4781
4782 Paths within the --files-from file will be interpreted as starting with
4783 the root specified in the command. Leading / characters are ignored.
4784
4785 For example, suppose you had files-from.txt with this content:
4786
4787 # comment
4788 file1.jpg
4789 subdir/file2.jpg
4790
4791 You could then use it like this:
4792
4793 rclone copy --files-from files-from.txt /home/me/pics remote:pics
4794
4795 This will transfer these files only (if they exist)
4796
4797 /home/me/pics/file1.jpg → remote:pics/file1.jpg
4798 /home/me/pics/subdir/file2.jpg → remote:pics/subdirfile1.jpg
4799
4800 To take a more complicated example, let's say you had a few files you
4801 want to back up regularly with these absolute paths:
4802
4803 /home/user1/important
4804 /home/user1/dir/file
4805 /home/user2/stuff
4806
4807 To copy these you'd find a common subdirectory - in this case /home and
4808 put the remaining files in files-from.txt with or without leading /, eg
4809
4810 user1/important
4811 user1/dir/file
4812 user2/stuff
4813
4814 You could then copy these to a remote like this
4815
4816 rclone copy --files-from files-from.txt /home remote:backup
4817
4818 The 3 files will arrive in remote:backup with the paths as in the
4819 files-from.txt like this:
4820
4821 /home/user1/important → remote:backup/user1/important
4822 /home/user1/dir/file → remote:backup/user1/dir/file
4823 /home/user2/stuff → remote:backup/stuff
4824
4825 You could of course choose / as the root too in which case your
4826 files-from.txt might look like this.
4827
4828 /home/user1/important
4829 /home/user1/dir/file
4830 /home/user2/stuff
4831
4832 And you would transfer it like this
4833
4834 rclone copy --files-from files-from.txt / remote:backup
4835
4836 In this case there will be an extra home directory on the remote:
4837
4838 /home/user1/important → remote:home/backup/user1/important
4839 /home/user1/dir/file → remote:home/backup/user1/dir/file
4840 /home/user2/stuff → remote:home/backup/stuff
4841
4842 --min-size - Don't transfer any file smaller than this
4843 This option controls the minimum size file which will be transferred.
4844 This defaults to kBytes but a suffix of k, M, or G can be used.
4845
4846 For example --min-size 50k means no files smaller than 50kByte will be
4847 transferred.
4848
4849 --max-size - Don't transfer any file larger than this
4850 This option controls the maximum size file which will be transferred.
4851 This defaults to kBytes but a suffix of k, M, or G can be used.
4852
4853 For example --max-size 1G means no files larger than 1GByte will be
4854 transferred.
4855
4856 --max-age - Don't transfer any file older than this
4857 This option controls the maximum age of files to transfer. Give in
4858 seconds or with a suffix of:
4859
4860 · ms - Milliseconds
4861
4862 · s - Seconds
4863
4864 · m - Minutes
4865
4866 · h - Hours
4867
4868 · d - Days
4869
4870 · w - Weeks
4871
4872 · M - Months
4873
4874 · y - Years
4875
4876 For example --max-age 2d means no files older than 2 days will be
4877 transferred.
4878
4879 --min-age - Don't transfer any file younger than this
4880 This option controls the minimum age of files to transfer. Give in
4881 seconds or with a suffix (see --max-age for list of suffixes)
4882
4883 For example --min-age 2d means no files younger than 2 days will be
4884 transferred.
4885
4886 --delete-excluded - Delete files on dest excluded from
4887 sync
4888
4889 Important this flag is dangerous - use with --dry-run and -v first.
4890
4891 When doing rclone sync this will delete any files which are excluded
4892 from the sync on the destination.
4893
4894 If for example you did a sync from A to B without the --min-size 50k
4895 flag
4896
4897 rclone sync A: B:
4898
4899 Then you repeated it like this with the --delete-excluded
4900
4901 rclone --min-size 50k --delete-excluded sync A: B:
4902
4903 This would delete all files on B which are less than 50 kBytes as these
4904 are now excluded from the sync.
4905
4906 Always test first with --dry-run and -v before using this flag.
4907
4908 --dump filters - dump the filters to the output
4909 This dumps the defined filters to the output as regular expressions.
4910
4911 Useful for debugging.
4912
4913 --ignore-case - make searches case insensitive
4914 Normally filter patterns are case sensitive. If this flag is supplied
4915 then filter patterns become case insensitive.
4916
4917 Normally a --include "file.txt" will not match a file called FILE.txt.
4918 However if you use the --ignore-case flag then --include "file.txt"
4919 this will match a file called FILE.txt.
4920
4921 Quoting shell metacharacters
4922 The examples above may not work verbatim in your shell as they have
4923 shell metacharacters in them (eg *), and may require quoting.
4924
4925 Eg linux, OSX
4926
4927 · --include \*.jpg
4928
4929 · --include '*.jpg'
4930
4931 · --include='*.jpg'
4932
4933 In Windows the expansion is done by the command not the shell so this
4934 should work fine
4935
4936 · --include *.jpg
4937
4938 Exclude directory based on a file
4939 It is possible to exclude a directory based on a file, which is present
4940 in this directory. Filename should be specified using the --ex‐
4941 clude-if-present flag. This flag has a priority over the other filter‐
4942 ing flags.
4943
4944 Imagine, you have the following directory structure:
4945
4946 dir1/file1
4947 dir1/dir2/file2
4948 dir1/dir2/dir3/file3
4949 dir1/dir2/dir3/.ignore
4950
4951 You can exclude dir3 from sync by running the following command:
4952
4953 rclone sync --exclude-if-present .ignore dir1 remote:backup
4954
4955 Currently only one filename is supported, i.e. --exclude-if-present
4956 should not be used multiple times.
4957
4959 If rclone is run with the --rc flag then it starts an http server which
4960 can be used to remote control rclone.
4961
4962 If you just want to run a remote control then see the rcd command
4963 (https://rclone.org/commands/rclone_rcd/).
4964
4965 NB this is experimental and everything here is subject to change!
4966
4967 Supported parameters
4968 –rc
4969 Flag to start the http server listen on remote requests
4970
4971 –rc-addr=IP
4972 IPaddress:Port or :Port to bind server to. (default “localhost:5572”)
4973
4974 –rc-cert=KEY
4975 SSL PEM key (concatenation of certificate and CA certificate)
4976
4977 –rc-client-ca=PATH
4978 Client certificate authority to verify clients with
4979
4980 –rc-htpasswd=PATH
4981 htpasswd file - if not provided no authentication is done
4982
4983 –rc-key=PATH
4984 SSL PEM Private key
4985
4986 –rc-max-header-bytes=VALUE
4987 Maximum size of request header (default 4096)
4988
4989 –rc-user=VALUE
4990 User name for authentication.
4991
4992 –rc-pass=VALUE
4993 Password for authentication.
4994
4995 –rc-realm=VALUE
4996 Realm for authentication (default “rclone”)
4997
4998 –rc-server-read-timeout=DURATION
4999 Timeout for server reading data (default 1h0m0s)
5000
5001 –rc-server-write-timeout=DURATION
5002 Timeout for server writing data (default 1h0m0s)
5003
5004 –rc-serve
5005 Enable the serving of remote objects via the HTTP interface. This
5006 means objects will be accessible at http://127.0.0.1:5572/ by default,
5007 so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/*
5008 to see a listing of the remotes. Objects may be requested from remotes
5009 using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object
5010
5011 Default Off.
5012
5013 –rc-files /path/to/directory
5014 Path to local files to serve on the HTTP server.
5015
5016 If this is set then rclone will serve the files in that directory. It
5017 will also open the root in the web browser if specified. This is for
5018 implementing browser based GUIs for rclone functions.
5019
5020 If --rc-user or --rc-pass is set then the URL that is opened will have
5021 the authorization in the URL in the http://user:pass@localhost/ style.
5022
5023 Default Off.
5024
5025 –rc-no-auth
5026 By default rclone will require authorisation to have been set up on the
5027 rc interface in order to use any methods which access any rclone re‐
5028 motes. Eg operations/list is denied as it involved creating a remote
5029 as is sync/copy.
5030
5031 If this is set then no authorisation will be required on the server to
5032 use these methods. The alternative is to use --rc-user and --rc-pass
5033 and use these credentials in the request.
5034
5035 Default Off.
5036
5037 Accessing the remote control via the rclone rc command
5038 Rclone itself implements the remote control protocol in its rclone rc
5039 command.
5040
5041 You can use it like this
5042
5043 $ rclone rc rc/noop param1=one param2=two
5044 {
5045 "param1": "one",
5046 "param2": "two"
5047 }
5048
5049 Run rclone rc on its own to see the help for the installed remote con‐
5050 trol commands.
5051
5052 rclone rc also supports a --json flag which can be used to send more
5053 complicated input parameters.
5054
5055 $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
5056 {
5057 "p1": [
5058 1,
5059 "2",
5060 null,
5061 4
5062 ],
5063 "p2": {
5064 "a": 1,
5065 "b": 2
5066 }
5067 }
5068
5069 Special parameters
5070 The rc interface supports some special parameters which apply to all
5071 commands. These start with _ to show they are different.
5072
5073 Running asynchronous jobs with _async = true
5074 If _async has a true value when supplied to an rc call then it will re‐
5075 turn immediately with a job id and the task will be run in the back‐
5076 ground. The job/status call can be used to get information of the
5077 background job. The job can be queried for up to 1 minute after it has
5078 finished.
5079
5080 It is recommended that potentially long running jobs, eg sync/sync,
5081 sync/copy, sync/move, operations/purge are run with the _async flag to
5082 avoid any potential problems with the HTTP request and response timing
5083 out.
5084
5085 Starting a job with the _async flag:
5086
5087 $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
5088 {
5089 "jobid": 2
5090 }
5091
5092 Query the status to see if the job has finished. For more information
5093 on the meaning of these return parameters see the job/status call.
5094
5095 $ rclone rc --json '{ "jobid":2 }' job/status
5096 {
5097 "duration": 0.000124163,
5098 "endTime": "2018-10-27T11:38:07.911245881+01:00",
5099 "error": "",
5100 "finished": true,
5101 "id": 2,
5102 "output": {
5103 "_async": true,
5104 "p1": [
5105 1,
5106 "2",
5107 null,
5108 4
5109 ],
5110 "p2": {
5111 "a": 1,
5112 "b": 2
5113 }
5114 },
5115 "startTime": "2018-10-27T11:38:07.911121728+01:00",
5116 "success": true
5117 }
5118
5119 job/list can be used to show the running or recently completed jobs
5120
5121 $ rclone rc job/list
5122 {
5123 "jobids": [
5124 2
5125 ]
5126 }
5127
5128 Supported commands
5129 cache/expire: Purge a remote from cache
5130 Purge a remote from the cache backend. Supports either a directory or
5131 a file. Params: - remote = path to remote (required) - withData =
5132 true/false to delete cached data (chunks) as well (optional)
5133
5134 Eg
5135
5136 rclone rc cache/expire remote=path/to/sub/folder/
5137 rclone rc cache/expire remote=/ withData=true
5138
5139 cache/fetch: Fetch file chunks
5140 Ensure the specified file chunks are cached on disk.
5141
5142 The chunks= parameter specifies the file chunks to check. It takes a
5143 comma separated list of array slice indices. The slice indices are
5144 similar to Python slices: start[:end]
5145
5146 start is the 0 based chunk number from the beginning of the file to
5147 fetch inclusive. end is 0 based chunk number from the beginning of the
5148 file to fetch exclusive. Both values can be negative, in which case
5149 they count from the back of the file. The value “-5:” represents the
5150 last 5 chunks of a file.
5151
5152 Some valid examples are: “:5,-5:” -> the first and last five chunks
5153 “0,-2” -> the first and the second last chunk “0:10” -> the first ten
5154 chunks
5155
5156 Any parameter with a key that starts with “file” can be used to specify
5157 files to fetch, eg
5158
5159 rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
5160
5161 File names will automatically be encrypted when the a crypt remote is
5162 used on top of the cache.
5163
5164 cache/stats: Get cache stats
5165 Show statistics for the cache remote.
5166
5167 config/create: create the config for a remote.
5168 This takes the following parameters
5169
5170 · name - name of remote
5171
5172 · type - type of new remote
5173
5174 · type - type of the new remote
5175
5176 See the config create command (https://rclone.org/commands/rclone_con‐
5177 fig_create/) command for more information on the above.
5178
5179 Authentication is required for this call.
5180
5181 config/delete: Delete a remote in the config file.
5182 Parameters: - name - name of remote to delete
5183
5184 See the config delete command (https://rclone.org/commands/rclone_con‐
5185 fig_delete/) command for more information on the above.
5186
5187 Authentication is required for this call.
5188
5189 config/dump: Dumps the config file.
5190 Returns a JSON object: - key: value
5191
5192 Where keys are remote names and values are the config parameters.
5193
5194 See the config dump command (https://rclone.org/commands/rclone_con‐
5195 fig_dump/) command for more information on the above.
5196
5197 Authentication is required for this call.
5198
5199 config/get: Get a remote in the config file.
5200 Parameters: - name - name of remote to get
5201
5202 See the config dump command (https://rclone.org/commands/rclone_con‐
5203 fig_dump/) command for more information on the above.
5204
5205 Authentication is required for this call.
5206
5207 config/listremotes: Lists the remotes in the config file.
5208 Returns - remotes - array of remote names
5209
5210 See the listremotes command (https://rclone.org/com‐
5211 mands/rclone_listremotes/) command for more information on the above.
5212
5213 Authentication is required for this call.
5214
5215 config/password: password the config for a remote.
5216 This takes the following parameters
5217
5218 · name - name of remote
5219
5220 · type - type of new remote
5221
5222 See the config password command (https://rclone.org/com‐
5223 mands/rclone_config_password/) command for more information on the
5224 above.
5225
5226 Authentication is required for this call.
5227
5228 config/providers: Shows how providers are configured in the config
5229 file.
5230
5231 Returns a JSON object: - providers - array of objects
5232
5233 See the config providers command (https://rclone.org/com‐
5234 mands/rclone_config_providers/) command for more information on the
5235 above.
5236
5237 Authentication is required for this call.
5238
5239 config/update: update the config for a remote.
5240 This takes the following parameters
5241
5242 · name - name of remote
5243
5244 · type - type of new remote
5245
5246 See the config update command (https://rclone.org/commands/rclone_con‐
5247 fig_update/) command for more information on the above.
5248
5249 Authentication is required for this call.
5250
5251 core/bwlimit: Set the bandwidth limit.
5252 This sets the bandwidth limit to that passed in.
5253
5254 Eg
5255
5256 rclone rc core/bwlimit rate=1M
5257 rclone rc core/bwlimit rate=off
5258
5259 The format of the parameter is exactly the same as passed to –bwlimit
5260 except only one bandwidth may be specified.
5261
5262 core/gc: Runs a garbage collection.
5263 This tells the go runtime to do a garbage collection run. It isn't
5264 necessary to call this normally, but it can be useful for debugging
5265 memory problems.
5266
5267 core/memstats: Returns the memory statistics
5268 This returns the memory statistics of the running program. What the
5269 values mean are explained in the go docs: https://golang.org/pkg/run‐
5270 time/#MemStats
5271
5272 The most interesting values for most people are:
5273
5274 · HeapAlloc: This is the amount of memory rclone is actually using
5275
5276 · HeapSys: This is the amount of memory rclone has obtained from the OS
5277
5278 · Sys: this is the total amount of memory requested from the OS
5279
5280 · It is virtual memory so may include unused memory
5281
5282 core/obscure: Obscures a string passed in.
5283 Pass a clear string and rclone will obscure it for the config file: -
5284 clear - string
5285
5286 Returns - obscured - string
5287
5288 core/pid: Return PID of current process
5289 This returns PID of current process. Useful for stopping rclone
5290 process.
5291
5292 core/stats: Returns stats about current transfers.
5293 This returns all available stats
5294
5295 rclone rc core/stats
5296
5297 Returns the following values:
5298
5299 {
5300 "speed": average speed in bytes/sec since start of the process,
5301 "bytes": total transferred bytes since the start of the process,
5302 "errors": number of errors,
5303 "fatalError": whether there has been at least one FatalError,
5304 "retryError": whether there has been at least one non-NoRetryError,
5305 "checks": number of checked files,
5306 "transfers": number of transferred files,
5307 "deletes" : number of deleted files,
5308 "elapsedTime": time in seconds since the start of the process,
5309 "lastError": last occurred error,
5310 "transferring": an array of currently active file transfers:
5311 [
5312 {
5313 "bytes": total transferred bytes for this file,
5314 "eta": estimated time in seconds until file transfer completion
5315 "name": name of the file,
5316 "percentage": progress of the file transfer in percent,
5317 "speed": speed in bytes/sec,
5318 "speedAvg": speed in bytes/sec as an exponentially weighted moving average,
5319 "size": size of the file in bytes
5320 }
5321 ],
5322 "checking": an array of names of currently active file checks
5323 []
5324 }
5325
5326 Values for “transferring”, “checking” and “lastError” are only assigned
5327 if data is available. The value for “eta” is null if an eta cannot be
5328 determined.
5329
5330 core/version: Shows the current version of rclone and the go
5331 runtime.
5332
5333 This shows the current version of go and the go runtime - version -
5334 rclone version, eg “v1.44” - decomposed - version number as [major, mi‐
5335 nor, patch, subpatch] - note patch and subpatch will be 999 for a git
5336 compiled version - isGit - boolean - true if this was compiled from the
5337 git version - os - OS in use as according to Go - arch - cpu architec‐
5338 ture in use according to Go - goVersion - version of Go runtime in use
5339
5340 job/list: Lists the IDs of the running jobs
5341 Parameters - None
5342
5343 Results - jobids - array of integer job ids
5344
5345 job/status: Reads the status of the job ID
5346 Parameters - jobid - id of the job (integer)
5347
5348 Results - finished - boolean - duration - time in seconds that the job
5349 ran for - endTime - time the job finished (eg
5350 “2018-10-26T18:50:20.528746884+01:00”) - error - error from the job or
5351 empty string for no error - finished - boolean whether the job has fin‐
5352 ished or not - id - as passed in above - startTime - time the job
5353 started (eg “2018-10-26T18:50:20.528336039+01:00”) - success - boolean
5354 - true for success false otherwise - output - output of the job as
5355 would have been returned if called synchronously
5356
5357 operations/about: Return the space used on the remote
5358 This takes the following parameters
5359
5360 · fs - a remote name string eg “drive:”
5361
5362 · remote - a path within that remote eg “dir”
5363
5364 The result is as returned from rclone about –json
5365
5366 Authentication is required for this call.
5367
5368 operations/cleanup: Remove trashed files in the remote or path
5369 This takes the following parameters
5370
5371 · fs - a remote name string eg “drive:”
5372
5373 See the cleanup command (https://rclone.org/commands/rclone_cleanup/)
5374 command for more information on the above.
5375
5376 Authentication is required for this call.
5377
5378 operations/copyfile: Copy a file from source remote to destination
5379 remote
5380
5381 This takes the following parameters
5382
5383 · srcFs - a remote name string eg “drive:” for the source
5384
5385 · srcRemote - a path within that remote eg “file.txt” for the source
5386
5387 · dstFs - a remote name string eg “drive2:” for the destination
5388
5389 · dstRemote - a path within that remote eg “file2.txt” for the destina‐
5390 tion
5391
5392 Authentication is required for this call.
5393
5394 operations/copyurl: Copy the URL to the object
5395 This takes the following parameters
5396
5397 · fs - a remote name string eg “drive:”
5398
5399 · remote - a path within that remote eg “dir”
5400
5401 · url - string, URL to read from
5402
5403 See the copyurl command (https://rclone.org/commands/rclone_copyurl/)
5404 command for more information on the above.
5405
5406 Authentication is required for this call.
5407
5408 operations/delete: Remove files in the path
5409 This takes the following parameters
5410
5411 · fs - a remote name string eg “drive:”
5412
5413 See the delete command (https://rclone.org/commands/rclone_delete/)
5414 command for more information on the above.
5415
5416 Authentication is required for this call.
5417
5418 operations/deletefile: Remove the single file pointed to
5419 This takes the following parameters
5420
5421 · fs - a remote name string eg “drive:”
5422
5423 · remote - a path within that remote eg “dir”
5424
5425 See the deletefile command (https://rclone.org/commands/rclone_delete‐
5426 file/) command for more information on the above.
5427
5428 Authentication is required for this call.
5429
5430 operations/list: List the given remote and path in JSON format
5431 This takes the following parameters
5432
5433 · fs - a remote name string eg “drive:”
5434
5435 · remote - a path within that remote eg “dir”
5436
5437 · opt - a dictionary of options to control the listing (optional)
5438
5439 · recurse - If set recurse directories
5440
5441 · noModTime - If set return modification time
5442
5443 · showEncrypted - If set show decrypted names
5444
5445 · showOrigIDs - If set show the IDs for each item if known
5446
5447 · showHash - If set return a dictionary of hashes
5448
5449 The result is
5450
5451 · list
5452
5453 · This is an array of objects as described in the lsjson command
5454
5455 See the lsjson command for more information on the above and examples.
5456
5457 Authentication is required for this call.
5458
5459 operations/mkdir: Make a destination directory or container
5460 This takes the following parameters
5461
5462 · fs - a remote name string eg “drive:”
5463
5464 · remote - a path within that remote eg “dir”
5465
5466 See the mkdir command (https://rclone.org/commands/rclone_mkdir/) com‐
5467 mand for more information on the above.
5468
5469 Authentication is required for this call.
5470
5471 operations/movefile: Move a file from source remote to destination
5472 remote
5473
5474 This takes the following parameters
5475
5476 · srcFs - a remote name string eg “drive:” for the source
5477
5478 · srcRemote - a path within that remote eg “file.txt” for the source
5479
5480 · dstFs - a remote name string eg “drive2:” for the destination
5481
5482 · dstRemote - a path within that remote eg “file2.txt” for the destina‐
5483 tion
5484
5485 Authentication is required for this call.
5486
5487 operations/publiclink: Create or retrieve a public link to the given
5488 file or folder.
5489
5490 This takes the following parameters
5491
5492 · fs - a remote name string eg “drive:”
5493
5494 · remote - a path within that remote eg “dir”
5495
5496 Returns
5497
5498 · url - URL of the resource
5499
5500 See the link command (https://rclone.org/commands/rclone_link/) command
5501 for more information on the above.
5502
5503 Authentication is required for this call.
5504
5505 operations/purge: Remove a directory or container and all of its
5506 contents
5507
5508 This takes the following parameters
5509
5510 · fs - a remote name string eg “drive:”
5511
5512 · remote - a path within that remote eg “dir”
5513
5514 See the purge command (https://rclone.org/commands/rclone_purge/) com‐
5515 mand for more information on the above.
5516
5517 Authentication is required for this call.
5518
5519 operations/rmdir: Remove an empty directory or container
5520 This takes the following parameters
5521
5522 · fs - a remote name string eg “drive:”
5523
5524 · remote - a path within that remote eg “dir”
5525
5526 See the rmdir command (https://rclone.org/commands/rclone_rmdir/) com‐
5527 mand for more information on the above.
5528
5529 Authentication is required for this call.
5530
5531 operations/rmdirs: Remove all the empty directories in the path
5532 This takes the following parameters
5533
5534 · fs - a remote name string eg “drive:”
5535
5536 · remote - a path within that remote eg “dir”
5537
5538 · leaveRoot - boolean, set to true not to delete the root
5539
5540 See the rmdirs command (https://rclone.org/commands/rclone_rmdirs/)
5541 command for more information on the above.
5542
5543 Authentication is required for this call.
5544
5545 operations/size: Count the number of bytes and files in remote
5546 This takes the following parameters
5547
5548 · fs - a remote name string eg “drive:path/to/dir”
5549
5550 Returns
5551
5552 · count - number of files
5553
5554 · bytes - number of bytes in those files
5555
5556 See the size command (https://rclone.org/commands/rclone_size/) command
5557 for more information on the above.
5558
5559 Authentication is required for this call.
5560
5561 options/blocks: List all the option blocks
5562 Returns - options - a list of the options block names
5563
5564 options/get: Get all the options
5565 Returns an object where keys are option block names and values are an
5566 object with the current option values in.
5567
5568 This shows the internal names of the option within rclone which should
5569 map to the external options very easily with a few exceptions.
5570
5571 options/set: Set an option
5572 Parameters
5573
5574 · option block name containing an object with
5575
5576 · key: value
5577
5578 Repeated as often as required.
5579
5580 Only supply the options you wish to change. If an option is unknown it
5581 will be silently ignored. Not all options will have an effect when
5582 changed like this.
5583
5584 For example:
5585
5586 This sets DEBUG level logs (-vv)
5587
5588 rclone rc options/set --json '{"main": {"LogLevel": 8}}'
5589
5590 And this sets INFO level logs (-v)
5591
5592 rclone rc options/set --json '{"main": {"LogLevel": 7}}'
5593
5594 And this sets NOTICE level logs (normal without -v)
5595
5596 rclone rc options/set --json '{"main": {"LogLevel": 6}}'
5597
5598 rc/error: This returns an error
5599 This returns an error with the input as part of its error string. Use‐
5600 ful for testing error handling.
5601
5602 rc/list: List all the registered remote control commands
5603 This lists all the registered remote control commands as a JSON map in
5604 the commands response.
5605
5606 rc/noop: Echo the input to the output parameters
5607 This echoes the input parameters to the output parameters for testing
5608 purposes. It can be used to check that rclone is still alive and to
5609 check that parameter passing is working properly.
5610
5611 rc/noopauth: Echo the input to the output parameters requiring auth
5612 This echoes the input parameters to the output parameters for testing
5613 purposes. It can be used to check that rclone is still alive and to
5614 check that parameter passing is working properly.
5615
5616 Authentication is required for this call.
5617
5618 sync/copy: copy a directory from source remote to destination remote
5619 This takes the following parameters
5620
5621 · srcFs - a remote name string eg “drive:src” for the source
5622
5623 · dstFs - a remote name string eg “drive:dst” for the destination
5624
5625 See the copy command (https://rclone.org/commands/rclone_copy/) command
5626 for more information on the above.
5627
5628 Authentication is required for this call.
5629
5630 sync/move: move a directory from source remote to destination remote
5631 This takes the following parameters
5632
5633 · srcFs - a remote name string eg “drive:src” for the source
5634
5635 · dstFs - a remote name string eg “drive:dst” for the destination
5636
5637 · deleteEmptySrcDirs - delete empty src directories if set
5638
5639 See the move command (https://rclone.org/commands/rclone_move/) command
5640 for more information on the above.
5641
5642 Authentication is required for this call.
5643
5644 sync/sync: sync a directory from source remote to destination remote
5645 This takes the following parameters
5646
5647 · srcFs - a remote name string eg “drive:src” for the source
5648
5649 · dstFs - a remote name string eg “drive:dst” for the destination
5650
5651 See the sync command (https://rclone.org/commands/rclone_sync/) command
5652 for more information on the above.
5653
5654 Authentication is required for this call.
5655
5656 vfs/forget: Forget files or directories in the directory cache.
5657 This forgets the paths in the directory cache causing them to be
5658 re-read from the remote when needed.
5659
5660 If no paths are passed in then it will forget all the paths in the di‐
5661 rectory cache.
5662
5663 rclone rc vfs/forget
5664
5665 Otherwise pass files or dirs in as file=path or dir=path. Any parame‐
5666 ter key starting with file will forget that file and any starting with
5667 dir will forget that dir, eg
5668
5669 rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
5670
5671 vfs/poll-interval: Get the status or update the value of the
5672 poll-interval option.
5673
5674 Without any parameter given this returns the current status of the
5675 poll-interval setting.
5676
5677 When the interval=duration parameter is set, the poll-interval value is
5678 updated and the polling function is notified. Setting interval=0 dis‐
5679 ables poll-interval.
5680
5681 rclone rc vfs/poll-interval interval=5m
5682
5683 The timeout=duration parameter can be used to specify a time to wait
5684 for the current poll function to apply the new value. If timeout is
5685 less or equal 0, which is the default, wait indefinitely.
5686
5687 The new poll-interval value will only be active when the timeout is not
5688 reached.
5689
5690 If poll-interval is updated or disabled temporarily, some changes might
5691 not get picked up by the polling function, depending on the used re‐
5692 mote.
5693
5694 vfs/refresh: Refresh the directory cache.
5695 This reads the directories for the specified paths and freshens the di‐
5696 rectory cache.
5697
5698 If no paths are passed in then it will refresh the root directory.
5699
5700 rclone rc vfs/refresh
5701
5702 Otherwise pass directories in as dir=path. Any parameter key starting
5703 with dir will refresh that directory, eg
5704
5705 rclone rc vfs/refresh dir=home/junk dir2=data/misc
5706
5707 If the parameter recursive=true is given the whole directory tree will
5708 get refreshed. This refresh will use –fast-list if enabled.
5709
5710 Accessing the remote control via HTTP
5711 Rclone implements a simple HTTP based protocol.
5712
5713 Each endpoint takes an JSON object and returns a JSON object or an er‐
5714 ror. The JSON objects are essentially a map of string names to values.
5715
5716 All calls must made using POST.
5717
5718 The input objects can be supplied using URL parameters, POST parameters
5719 or by supplying “Content-Type: application/json” and a JSON blob in the
5720 body. There are examples of these below using curl.
5721
5722 The response will be a JSON blob in the body of the response. This is
5723 formatted to be reasonably human readable.
5724
5725 Error returns
5726 If an error occurs then there will be an HTTP error status (eg 500) and
5727 the body of the response will contain a JSON encoded error object, eg
5728
5729 {
5730 "error": "Expecting string value for key \"remote\" (was float64)",
5731 "input": {
5732 "fs": "/tmp",
5733 "remote": 3
5734 },
5735 "status": 400
5736 "path": "operations/rmdir",
5737 }
5738
5739 The keys in the error response are - error - error string - input - the
5740 input parameters to the call - status - the HTTP status code - path -
5741 the path of the call
5742
5743 CORS
5744 The sever implements basic CORS support and allows all origins for
5745 that. The response to a preflight OPTIONS request will echo the re‐
5746 quested “Access-Control-Request-Headers” back.
5747
5748 Using POST with URL parameters only
5749 curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
5750
5751 Response
5752
5753 {
5754 "potato": "1",
5755 "sausage": "2"
5756 }
5757
5758 Here is what an error response looks like:
5759
5760 curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
5761
5762 {
5763 "error": "arbitrary error on input map[potato:1 sausage:2]",
5764 "input": {
5765 "potato": "1",
5766 "sausage": "2"
5767 }
5768 }
5769
5770 Note that curl doesn't return errors to the shell unless you use the -f
5771 option
5772
5773 $ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
5774 curl: (22) The requested URL returned error: 400 Bad Request
5775 $ echo $?
5776 22
5777
5778 Using POST with a form
5779 curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop
5780
5781 Response
5782
5783 {
5784 "potato": "1",
5785 "sausage": "2"
5786 }
5787
5788 Note that you can combine these with URL parameters too with the POST
5789 parameters taking precedence.
5790
5791 curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"
5792
5793 Response
5794
5795 {
5796 "potato": "1",
5797 "rutabaga": "3",
5798 "sausage": "4"
5799 }
5800
5801 Using POST with a JSON blob
5802 curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop
5803
5804 response
5805
5806 {
5807 "password": "xyz",
5808 "username": "xyz"
5809 }
5810
5811 This can be combined with URL parameters too if required. The JSON
5812 blob takes precedence.
5813
5814 curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
5815
5816 {
5817 "potato": 2,
5818 "rutabaga": "3",
5819 "sausage": 1
5820 }
5821
5822 Debugging rclone with pprof
5823 If you use the --rc flag this will also enable the use of the go pro‐
5824 filing tools on the same port.
5825
5826 To use these, first install go (https://golang.org/doc/install).
5827
5828 Debugging memory use
5829 To profile rclone's memory use you can run:
5830
5831 go tool pprof -web http://localhost:5572/debug/pprof/heap
5832
5833 This should open a page in your browser showing what is using what mem‐
5834 ory.
5835
5836 You can also use the -text flag to produce a textual summary
5837
5838 $ go tool pprof -text http://localhost:5572/debug/pprof/heap
5839 Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
5840 flat flat% sum% cum cum%
5841 1024.03kB 66.62% 66.62% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
5842 513kB 33.38% 100% 513kB 33.38% net/http.newBufioWriterSize
5843 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/cmd/all.init
5844 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/cmd/serve.init
5845 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/cmd/serve/restic.init
5846 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2.init
5847 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.init
5848 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.init.0
5849 0 0% 100% 1024.03kB 66.62% main.init
5850 0 0% 100% 513kB 33.38% net/http.(*conn).readRequest
5851 0 0% 100% 513kB 33.38% net/http.(*conn).serve
5852 0 0% 100% 1024.03kB 66.62% runtime.main
5853
5854 Debugging go routine leaks
5855 Memory leaks are most often caused by go routine leaks keeping memory
5856 alive which should have been garbage collected.
5857
5858 See all active go routines using
5859
5860 curl http://localhost:5572/debug/pprof/goroutine?debug=1
5861
5862 Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your
5863 browser.
5864
5865 Other profiles to look at
5866 You can see a summary of profiles available at http://local‐
5867 host:5572/debug/pprof/
5868
5869 Here is how to use some of them:
5870
5871 · Memory: go tool pprof http://localhost:5572/debug/pprof/heap
5872
5873 · Go routines: curl http://localhost:5572/debug/pprof/goroutine?debug=1
5874
5875 · 30-second CPU profile: go tool pprof http://localhost:5572/de‐
5876 bug/pprof/profile
5877
5878 · 5-second execution trace: wget http://localhost:5572/de‐
5879 bug/pprof/trace?seconds=5
5880
5881 See the net/http/pprof docs (https://golang.org/pkg/net/http/pprof/)
5882 for more info on how to use the profiling and for a general overview
5883 see the Go team's blog post on profiling go programs
5884 (https://blog.golang.org/profiling-go-programs).
5885
5886 The profiling hook is zero overhead unless it is used (https://stack‐
5887 overflow.com/q/26545159/164234).
5888
5890 Each cloud storage system is slightly different. Rclone attempts to
5891 provide a unified interface to them, but some underlying differences
5892 show through.
5893
5894 Features
5895 Here is an overview of the major features of each cloud storage system.
5896
5897 Name Hash ModTime Case Insen‐ Duplicate MIME Type
5898 sitive Files
5899 ──────────────────────────────────────────────────────────────────────────
5900 Amazon MD5 No Yes No R
5901 Drive
5902 Amazon S3 MD5 Yes No No R/W
5903 Backblaze SHA1 Yes No No R/W
5904 B2
5905 Box SHA1 Yes Yes No -
5906 Dropbox DBHASH † Yes Yes No -
5907 FTP - No No No -
5908 Google MD5 Yes No No R/W
5909 Cloud Stor‐
5910 age
5911 Google MD5 Yes No Yes R/W
5912 Drive
5913 HTTP - No No No R
5914 Hubic MD5 Yes No No R/W
5915 Jottacloud MD5 Yes Yes No R/W
5916 Koofr MD5 No Yes No -
5917 Mega - No No Yes -
5918 Microsoft MD5 Yes No No R/W
5919 Azure Blob
5920 Storage
5921 Microsoft SHA1 ‡‡ Yes Yes No R
5922 OneDrive
5923 OpenDrive MD5 Yes Yes No -
5924 Openstack MD5 Yes No No R/W
5925 Swift
5926 pCloud MD5, SHA1 Yes No No W
5927 QingStor MD5 No No No R/W
5928 SFTP MD5, SHA1 ‡ Yes Depends No -
5929 WebDAV MD5, SHA1 Yes ††† Depends No -
5930 ††
5931 Yandex Disk MD5 Yes No No R/W
5932 The local All Yes Depends No -
5933 filesystem
5934
5935 Hash
5936 The cloud storage system supports various hash types of the objects.
5937 The hashes are used when transferring data as an integrity check and
5938 can be specifically used with the --checksum flag in syncs and in the
5939 check command.
5940
5941 To use the verify checksums when transferring between cloud storage
5942 systems they must support a common hash type.
5943
5944 † Note that Dropbox supports its own custom hash (https://www.drop‐
5945 box.com/developers/reference/content-hash). This is an SHA256 sum of
5946 all the 4MB block SHA256s.
5947
5948 ‡ SFTP supports checksums if the same login has shell access and md5sum
5949 or sha1sum as well as echo are in the remote's PATH.
5950
5951 †† WebDAV supports hashes when used with Owncloud and Nextcloud only.
5952
5953 ††† WebDAV supports modtimes when used with Owncloud and Nextcloud on‐
5954 ly.
5955
5956 ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
5957 for business and SharePoint server support Microsoft's own QuickXorHash
5958 (https://docs.microsoft.com/en-us/onedrive/developer/code-snip‐
5959 pets/quickxorhash).
5960
5961 ModTime
5962 The cloud storage system supports setting modification times on ob‐
5963 jects. If it does then this enables a using the modification times as
5964 part of the sync. If not then only the size will be checked by de‐
5965 fault, though the MD5SUM can be checked with the --checksum flag.
5966
5967 All cloud storage systems support some kind of date on the object and
5968 these will be set when transferring from the cloud storage system.
5969
5970 Case Insensitive
5971 If a cloud storage systems is case sensitive then it is possible to
5972 have two files which differ only in case, eg file.txt and FILE.txt. If
5973 a cloud storage system is case insensitive then that isn't possible.
5974
5975 This can cause problems when syncing between a case insensitive system
5976 and a case sensitive system. The symptom of this is that no matter how
5977 many times you run the sync it never completes fully.
5978
5979 The local filesystem and SFTP may or may not be case sensitive depend‐
5980 ing on OS.
5981
5982 · Windows - usually case insensitive, though case is preserved
5983
5984 · OSX - usually case insensitive, though it is possible to format case
5985 sensitive
5986
5987 · Linux - usually case sensitive, but there are case insensitive file
5988 systems (eg FAT formatted USB keys)
5989
5990 Most of the time this doesn't cause any problems as people tend to
5991 avoid files whose name differs only by case even on case sensitive sys‐
5992 tems.
5993
5994 Duplicate files
5995 If a cloud storage system allows duplicate files then it can have two
5996 objects with the same name.
5997
5998 This confuses rclone greatly when syncing - use the rclone dedupe com‐
5999 mand to rename or remove duplicates.
6000
6001 MIME Type
6002 MIME types (also known as media types) classify types of documents us‐
6003 ing a simple text classification, eg text/html or application/pdf.
6004
6005 Some cloud storage systems support reading (R) the MIME type of objects
6006 and some support writing (W) the MIME type of objects.
6007
6008 The MIME type can be important if you are serving files directly to
6009 HTTP from the storage system.
6010
6011 If you are copying from a remote which supports reading (R) to a remote
6012 which supports writing (W) then rclone will preserve the MIME types.
6013 Otherwise they will be guessed from the extension, or the remote itself
6014 may assign the MIME type.
6015
6016 Optional Features
6017 All the remotes support a basic set of features, but there are some op‐
6018 tional features supported by some remotes used to make some operations
6019 more efficient.
6020
6021 Name Purge Copy Move DirMove CleanUp ListR StreamU‐ LinkShar‐ About
6022 pload ing
6023 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
6024 Amazon Yes No Yes Yes No #575 No No No #2178 No
6025 Drive (https://github.com/ncw/rclone/is‐ (https://github.com/ncw/rclone/is‐
6026 sues/575) sues/2178)
6027 Amazon No Yes No No No Yes Yes No #2178 No
6028 S3 (https://github.com/ncw/rclone/is‐
6029 sues/2178)
6030 Back‐ No No No No Yes Yes Yes No #2178 No
6031 blaze (https://github.com/ncw/rclone/is‐
6032 B2 sues/2178)
6033 Box Yes Yes Yes Yes No #575 No Yes Yes No
6034 (https://github.com/ncw/rclone/is‐
6035 sues/575)
6036 Dropbox Yes Yes Yes Yes No #575 No Yes Yes Yes
6037 (https://github.com/ncw/rclone/is‐
6038 sues/575)
6039 FTP No No Yes Yes No No Yes No #2178 No
6040 (https://github.com/ncw/rclone/is‐
6041 sues/2178)
6042 Google Yes Yes No No No Yes Yes No #2178 No
6043 Cloud (https://github.com/ncw/rclone/is‐
6044 Storage sues/2178)
6045 Google Yes Yes Yes Yes Yes Yes Yes Yes Yes
6046 Drive
6047 HTTP No No No No No No No No #2178 No
6048 (https://github.com/ncw/rclone/is‐
6049 sues/2178)
6050 Hubic Yes † Yes No No No Yes Yes No #2178 Yes
6051 (https://github.com/ncw/rclone/is‐
6052 sues/2178)
6053 Jotta‐ Yes Yes Yes Yes No Yes No Yes Yes
6054 cloud
6055 Mega Yes No Yes Yes No No No No #2178 Yes
6056 (https://github.com/ncw/rclone/is‐
6057 sues/2178)
6058 Micro‐ Yes Yes No No No Yes No No #2178 No
6059 soft (https://github.com/ncw/rclone/is‐
6060 Azure sues/2178)
6061 Blob
6062 Storage
6063 Micro‐ Yes Yes Yes Yes No #575 No No Yes Yes
6064 soft (https://github.com/ncw/rclone/is‐
6065 OneDrive sues/575)
6066 Open‐ Yes Yes Yes Yes No No No No No
6067 Drive
6068 Open‐ Yes † Yes No No No Yes Yes No #2178 Yes
6069 stack (https://github.com/ncw/rclone/is‐
6070 Swift sues/2178)
6071 pCloud Yes Yes Yes Yes Yes No No No #2178 Yes
6072 (https://github.com/ncw/rclone/is‐
6073 sues/2178)
6074 QingStor No Yes No No No Yes No No #2178 No
6075 (https://github.com/ncw/rclone/is‐
6076 sues/2178)
6077
6078
6079
6080 SFTP No No Yes Yes No No Yes No #2178 No
6081 (https://github.com/ncw/rclone/is‐
6082 sues/2178)
6083 WebDAV Yes Yes Yes Yes No No Yes ‡ No #2178 Yes
6084 (https://github.com/ncw/rclone/is‐
6085 sues/2178)
6086 Yandex Yes Yes Yes Yes Yes No Yes Yes Yes
6087 Disk
6088 The lo‐ Yes No Yes Yes No No Yes No Yes
6089 cal
6090 filesys‐
6091 tem
6092
6093 Purge
6094 This deletes a directory quicker than just deleting all the files in
6095 the directory.
6096
6097 † Note Swift and Hubic implement this in order to delete directory
6098 markers but they don't actually have a quicker way of deleting files
6099 other than deleting them individually.
6100
6101 ‡ StreamUpload is not supported with Nextcloud
6102
6103 Copy
6104 Used when copying an object to and from the same remote. This known as
6105 a server side copy so you can copy a file without downloading it and
6106 uploading it again. It is used if you use rclone copy or rclone move
6107 if the remote doesn't support Move directly.
6108
6109 If the server doesn't support Copy directly then for copy operations
6110 the file is downloaded then re-uploaded.
6111
6112 Move
6113 Used when moving/renaming an object on the same remote. This is known
6114 as a server side move of a file. This is used in rclone move if the
6115 server doesn't support DirMove.
6116
6117 If the server isn't capable of Move then rclone simulates it with Copy
6118 then delete. If the server doesn't support Copy then rclone will down‐
6119 load the file and re-upload it.
6120
6121 DirMove
6122 This is used to implement rclone move to move a directory if possible.
6123 If it isn't then it will use Move on each file (which falls back to
6124 Copy then download and upload - see Move section).
6125
6126 CleanUp
6127 This is used for emptying the trash for a remote by rclone cleanup.
6128
6129 If the server can't do CleanUp then rclone cleanup will return an er‐
6130 ror.
6131
6132 ListR
6133 The remote supports a recursive list to list all the contents beneath a
6134 directory quickly. This enables the --fast-list flag to work. See the
6135 rclone docs (/docs/#fast-list) for more details.
6136
6137 StreamUpload
6138 Some remotes allow files to be uploaded without knowing the file size
6139 in advance. This allows certain operations to work without spooling
6140 the file to local disk first, e.g. rclone rcat.
6141
6142 LinkSharing
6143 Sets the necessary permissions on a file or folder and prints a link
6144 that allows others to access them, even if they don't have an account
6145 on the particular cloud provider.
6146
6147 About
6148 This is used to fetch quota information from the remote, like bytes
6149 used/free/quota and bytes used in the trash.
6150
6151 This is also used to return the space used, available for rclone mount.
6152
6153 If the server can't do About then rclone about will return an error.
6154
6155 Alias
6156 The alias remote provides a new name for another remote.
6157
6158 Paths may be as deep as required or a local path, eg remote:directo‐
6159 ry/subdirectory or /directory/subdirectory.
6160
6161 During the initial setup with rclone config you will specify the target
6162 remote. The target remote can either be a local path or another re‐
6163 mote.
6164
6165 Subfolders can be used in target remote. Assume a alias remote named
6166 backup with the target mydrive:private/backup. Invoking
6167 rclone mkdir backup:desktop is exactly the same as invoking
6168 rclone mkdir mydrive:private/backup/desktop.
6169
6170 There will be no special handling of paths containing .. segments.
6171 Invoking rclone mkdir backup:../desktop is exactly the same as invoking
6172 rclone mkdir mydrive:private/backup/../desktop. The empty path is not
6173 allowed as a remote. To alias the current directory use . instead.
6174
6175 Here is an example of how to make a alias called remote for local fold‐
6176 er. First run:
6177
6178 rclone config
6179
6180 This will guide you through an interactive setup process:
6181
6182 No remotes found - make a new one
6183 n) New remote
6184 s) Set configuration password
6185 q) Quit config
6186 n/s/q> n
6187 name> remote
6188 Type of storage to configure.
6189 Choose a number from below, or type in your own value
6190 1 / Alias for a existing remote
6191 \ "alias"
6192 2 / Amazon Drive
6193 \ "amazon cloud drive"
6194 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
6195 \ "s3"
6196 4 / Backblaze B2
6197 \ "b2"
6198 5 / Box
6199 \ "box"
6200 6 / Cache a remote
6201 \ "cache"
6202 7 / Dropbox
6203 \ "dropbox"
6204 8 / Encrypt/Decrypt a remote
6205 \ "crypt"
6206 9 / FTP Connection
6207 \ "ftp"
6208 10 / Google Cloud Storage (this is not Google Drive)
6209 \ "google cloud storage"
6210 11 / Google Drive
6211 \ "drive"
6212 12 / Hubic
6213 \ "hubic"
6214 13 / Local Disk
6215 \ "local"
6216 14 / Microsoft Azure Blob Storage
6217 \ "azureblob"
6218 15 / Microsoft OneDrive
6219 \ "onedrive"
6220 16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
6221 \ "swift"
6222 17 / Pcloud
6223 \ "pcloud"
6224 18 / QingCloud Object Storage
6225 \ "qingstor"
6226 19 / SSH/SFTP Connection
6227 \ "sftp"
6228 20 / Webdav
6229 \ "webdav"
6230 21 / Yandex Disk
6231 \ "yandex"
6232 22 / http Connection
6233 \ "http"
6234 Storage> 1
6235 Remote or path to alias.
6236 Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
6237 remote> /mnt/storage/backup
6238 Remote config
6239 --------------------
6240 [remote]
6241 remote = /mnt/storage/backup
6242 --------------------
6243 y) Yes this is OK
6244 e) Edit this remote
6245 d) Delete this remote
6246 y/e/d> y
6247 Current remotes:
6248
6249 Name Type
6250 ==== ====
6251 remote alias
6252
6253 e) Edit existing remote
6254 n) New remote
6255 d) Delete remote
6256 r) Rename remote
6257 c) Copy remote
6258 s) Set configuration password
6259 q) Quit config
6260 e/n/d/r/c/s/q> q
6261
6262 Once configured you can then use rclone like this,
6263
6264 List directories in top level in /mnt/storage/backup
6265
6266 rclone lsd remote:
6267
6268 List all the files in /mnt/storage/backup
6269
6270 rclone ls remote:
6271
6272 Copy another local directory to the alias directory called source
6273
6274 rclone copy /home/source remote:source
6275
6276 Standard Options
6277 Here are the standard options specific to alias (Alias for a existing
6278 remote).
6279
6280 –alias-remote
6281 Remote or path to alias. Can be “myremote:path/to/dir”, “myre‐
6282 mote:bucket”, “myremote:” or “/local/path”.
6283
6284 · Config: remote
6285
6286 · Env Var: RCLONE_ALIAS_REMOTE
6287
6288 · Type: string
6289
6290 · Default: ""
6291
6292 Amazon Drive
6293 Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
6294 service run by Amazon for consumers.
6295
6296 Status
6297 Important: rclone supports Amazon Drive only if you have your own set
6298 of API keys. Unfortunately the Amazon Drive developer program
6299 (https://developer.amazon.com/amazon-drive) is now closed to new en‐
6300 tries so if you don't already have your own set of keys you will not be
6301 able to use rclone with Amazon Drive.
6302
6303 For the history on why rclone no longer has a set of Amazon Drive API
6304 keys see the forum (https://forum.rclone.org/t/rclone-has-been-banned-
6305 from-amazon-drive/2314).
6306
6307 If you happen to know anyone who works at Amazon then please ask them
6308 to re-instate rclone into the Amazon Drive developer program - thanks!
6309
6310 Setup
6311 The initial setup for Amazon Drive involves getting a token from Amazon
6312 which you need to do in your browser. rclone config walks you through
6313 it.
6314
6315 The configuration process for Amazon Drive may involve using an oauth
6316 proxy (https://github.com/ncw/oauthproxy). This is used to keep the
6317 Amazon credentials out of the source code. The proxy runs in Google's
6318 very secure App Engine environment and doesn't store any credentials
6319 which pass through it.
6320
6321 Since rclone doesn't currently have its own Amazon Drive credentials so
6322 you will either need to have your own client_id and client_secret with
6323 Amazon Drive, or use a a third party ouath proxy in which case you will
6324 need to enter client_id, client_secret, auth_url and token_url.
6325
6326 Note also if you are not using Amazon's auth_url and token_url, (ie you
6327 filled in something for those) then if setting up on a remote machine
6328 you can only use the copying the config method of configuration
6329 (https://rclone.org/remote_setup/#configuring-by-copying-the-config-
6330 file) - rclone authorize will not work.
6331
6332 Here is an example of how to make a remote called remote. First run:
6333
6334 rclone config
6335
6336 This will guide you through an interactive setup process:
6337
6338 No remotes found - make a new one
6339 n) New remote
6340 r) Rename remote
6341 c) Copy remote
6342 s) Set configuration password
6343 q) Quit config
6344 n/r/c/s/q> n
6345 name> remote
6346 Type of storage to configure.
6347 Choose a number from below, or type in your own value
6348 1 / Amazon Drive
6349 \ "amazon cloud drive"
6350 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
6351 \ "s3"
6352 3 / Backblaze B2
6353 \ "b2"
6354 4 / Dropbox
6355 \ "dropbox"
6356 5 / Encrypt/Decrypt a remote
6357 \ "crypt"
6358 6 / FTP Connection
6359 \ "ftp"
6360 7 / Google Cloud Storage (this is not Google Drive)
6361 \ "google cloud storage"
6362 8 / Google Drive
6363 \ "drive"
6364 9 / Hubic
6365 \ "hubic"
6366 10 / Local Disk
6367 \ "local"
6368 11 / Microsoft OneDrive
6369 \ "onedrive"
6370 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
6371 \ "swift"
6372 13 / SSH/SFTP Connection
6373 \ "sftp"
6374 14 / Yandex Disk
6375 \ "yandex"
6376 Storage> 1
6377 Amazon Application Client Id - required.
6378 client_id> your client ID goes here
6379 Amazon Application Client Secret - required.
6380 client_secret> your client secret goes here
6381 Auth server URL - leave blank to use Amazon's.
6382 auth_url> Optional auth URL
6383 Token server url - leave blank to use Amazon's.
6384 token_url> Optional token URL
6385 Remote config
6386 Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
6387 Use auto config?
6388 * Say Y if not sure
6389 * Say N if you are working on a remote or headless machine
6390 y) Yes
6391 n) No
6392 y/n> y
6393 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
6394 Log in and authorize rclone for access
6395 Waiting for code...
6396 Got code
6397 --------------------
6398 [remote]
6399 client_id = your client ID goes here
6400 client_secret = your client secret goes here
6401 auth_url = Optional auth URL
6402 token_url = Optional token URL
6403 token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
6404 --------------------
6405 y) Yes this is OK
6406 e) Edit this remote
6407 d) Delete this remote
6408 y/e/d> y
6409
6410 See the remote setup docs (https://rclone.org/remote_setup/) for how to
6411 set it up on a machine with no Internet browser available.
6412
6413 Note that rclone runs a webserver on your local machine to collect the
6414 token as returned from Amazon. This only runs from the moment it opens
6415 your browser to the moment you get back the verification code. This is
6416 on http://127.0.0.1:53682/ and this it may require you to unblock it
6417 temporarily if you are running a host firewall.
6418
6419 Once configured you can then use rclone like this,
6420
6421 List directories in top level of your Amazon Drive
6422
6423 rclone lsd remote:
6424
6425 List all the files in your Amazon Drive
6426
6427 rclone ls remote:
6428
6429 To copy a local directory to an Amazon Drive directory called backup
6430
6431 rclone copy /home/source remote:backup
6432
6433 Modified time and MD5SUMs
6434 Amazon Drive doesn't allow modification times to be changed via the API
6435 so these won't be accurate or used for syncing.
6436
6437 It does store MD5SUMs so for a more accurate sync, you can use the
6438 --checksum flag.
6439
6440 Deleting files
6441 Any files you delete with rclone will end up in the trash. Amazon
6442 don't provide an API to permanently delete files, nor to empty the
6443 trash, so you will have to do that with one of Amazon's apps or via the
6444 Amazon Drive website. As of November 17, 2016, files are automatically
6445 deleted by Amazon from the trash after 30 days.
6446
6447 Using with non .com Amazon accounts
6448 Let's say you usually use amazon.co.uk. When you authenticate with
6449 rclone it will take you to an amazon.com page to log in. Your ama‐
6450 zon.co.uk email and password should work here just fine.
6451
6452 Standard Options
6453 Here are the standard options specific to amazon cloud drive (Amazon
6454 Drive).
6455
6456 –acd-client-id
6457 Amazon Application Client ID.
6458
6459 · Config: client_id
6460
6461 · Env Var: RCLONE_ACD_CLIENT_ID
6462
6463 · Type: string
6464
6465 · Default: ""
6466
6467 –acd-client-secret
6468 Amazon Application Client Secret.
6469
6470 · Config: client_secret
6471
6472 · Env Var: RCLONE_ACD_CLIENT_SECRET
6473
6474 · Type: string
6475
6476 · Default: ""
6477
6478 Advanced Options
6479 Here are the advanced options specific to amazon cloud drive (Amazon
6480 Drive).
6481
6482 –acd-auth-url
6483 Auth server URL. Leave blank to use Amazon's.
6484
6485 · Config: auth_url
6486
6487 · Env Var: RCLONE_ACD_AUTH_URL
6488
6489 · Type: string
6490
6491 · Default: ""
6492
6493 –acd-token-url
6494 Token server url. leave blank to use Amazon's.
6495
6496 · Config: token_url
6497
6498 · Env Var: RCLONE_ACD_TOKEN_URL
6499
6500 · Type: string
6501
6502 · Default: ""
6503
6504 –acd-checkpoint
6505 Checkpoint for internal polling (debug).
6506
6507 · Config: checkpoint
6508
6509 · Env Var: RCLONE_ACD_CHECKPOINT
6510
6511 · Type: string
6512
6513 · Default: ""
6514
6515 –acd-upload-wait-per-gb
6516 Additional time per GB to wait after a failed complete upload to see if
6517 it appears.
6518
6519 Sometimes Amazon Drive gives an error when a file has been fully up‐
6520 loaded but the file appears anyway after a little while. This happens
6521 sometimes for files over 1GB in size and nearly every time for files
6522 bigger than 10GB. This parameter controls the time rclone waits for
6523 the file to appear.
6524
6525 The default value for this parameter is 3 minutes per GB, so by default
6526 it will wait 3 minutes for every GB uploaded to see if the file ap‐
6527 pears.
6528
6529 You can disable this feature by setting it to 0. This may cause con‐
6530 flict errors as rclone retries the failed upload but the file will most
6531 likely appear correctly eventually.
6532
6533 These values were determined empirically by observing lots of uploads
6534 of big files for a range of file sizes.
6535
6536 Upload with the “-v” flag to see more info about what rclone is doing
6537 in this situation.
6538
6539 · Config: upload_wait_per_gb
6540
6541 · Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
6542
6543 · Type: Duration
6544
6545 · Default: 3m0s
6546
6547 –acd-templink-threshold
6548 Files >= this size will be downloaded via their tempLink.
6549
6550 Files this size or more will be downloaded via their “tempLink”. This
6551 is to work around a problem with Amazon Drive which blocks downloads of
6552 files bigger than about 10GB. The default for this is 9GB which
6553 shouldn't need to be changed.
6554
6555 To download files above this threshold, rclone requests a “tempLink”
6556 which downloads the file through a temporary URL directly from the un‐
6557 derlying S3 storage.
6558
6559 · Config: templink_threshold
6560
6561 · Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
6562
6563 · Type: SizeSuffix
6564
6565 · Default: 9G
6566
6567 Limitations
6568 Note that Amazon Drive is case insensitive so you can't have a file
6569 called “Hello.doc” and one called “hello.doc”.
6570
6571 Amazon Drive has rate limiting so you may notice errors in the sync
6572 (429 errors). rclone will automatically retry the sync up to 3 times
6573 by default (see --retries flag) which should hopefully work around this
6574 problem.
6575
6576 Amazon Drive has an internal limit of file sizes that can be uploaded
6577 to the service. This limit is not officially published, but all files
6578 larger than this will fail.
6579
6580 At the time of writing (Jan 2016) is in the area of 50GB per file.
6581 This means that larger files are likely to fail.
6582
6583 Unfortunately there is no way for rclone to see that this failure is
6584 because of file size, so it will retry the operation, as any other
6585 failure. To avoid this problem, use --max-size 50000M option to limit
6586 the maximum size of uploaded files. Note that --max-size does not
6587 split files into segments, it only ignores files over this size.
6588
6589 Amazon S3 Storage Providers
6590 The S3 backend can be used with a number of different providers:
6591
6592 · AWS S3
6593
6594 · Alibaba Cloud (Aliyun) Object Storage System (OSS)
6595
6596 · Ceph
6597
6598 · DigitalOcean Spaces
6599
6600 · Dreamhost
6601
6602 · IBM COS S3
6603
6604 · Minio
6605
6606 · Wasabi
6607
6608 Paths are specified as remote:bucket (or remote: for the lsd command.)
6609 You may put subdirectories in too, eg remote:bucket/path/to/dir.
6610
6611 Once you have made a remote (see the provider specific section above)
6612 you can use it like this:
6613
6614 See all buckets
6615
6616 rclone lsd remote:
6617
6618 Make a new bucket
6619
6620 rclone mkdir remote:bucket
6621
6622 List the contents of a bucket
6623
6624 rclone ls remote:bucket
6625
6626 Sync /home/local/directory to the remote bucket, deleting any excess
6627 files in the bucket.
6628
6629 rclone sync /home/local/directory remote:bucket
6630
6631 AWS S3
6632 Here is an example of making an s3 configuration. First run
6633
6634 rclone config
6635
6636 This will guide you through an interactive setup process.
6637
6638 No remotes found - make a new one
6639 n) New remote
6640 s) Set configuration password
6641 q) Quit config
6642 n/s/q> n
6643 name> remote
6644 Type of storage to configure.
6645 Choose a number from below, or type in your own value
6646 1 / Alias for a existing remote
6647 \ "alias"
6648 2 / Amazon Drive
6649 \ "amazon cloud drive"
6650 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
6651 \ "s3"
6652 4 / Backblaze B2
6653 \ "b2"
6654 [snip]
6655 23 / http Connection
6656 \ "http"
6657 Storage> s3
6658 Choose your S3 provider.
6659 Choose a number from below, or type in your own value
6660 1 / Amazon Web Services (AWS) S3
6661 \ "AWS"
6662 2 / Ceph Object Storage
6663 \ "Ceph"
6664 3 / Digital Ocean Spaces
6665 \ "DigitalOcean"
6666 4 / Dreamhost DreamObjects
6667 \ "Dreamhost"
6668 5 / IBM COS S3
6669 \ "IBMCOS"
6670 6 / Minio Object Storage
6671 \ "Minio"
6672 7 / Wasabi Object Storage
6673 \ "Wasabi"
6674 8 / Any other S3 compatible provider
6675 \ "Other"
6676 provider> 1
6677 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
6678 Choose a number from below, or type in your own value
6679 1 / Enter AWS credentials in the next step
6680 \ "false"
6681 2 / Get AWS credentials from the environment (env vars or IAM)
6682 \ "true"
6683 env_auth> 1
6684 AWS Access Key ID - leave blank for anonymous access or runtime credentials.
6685 access_key_id> XXX
6686 AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
6687 secret_access_key> YYY
6688 Region to connect to.
6689 Choose a number from below, or type in your own value
6690 / The default endpoint - a good choice if you are unsure.
6691 1 | US Region, Northern Virginia or Pacific Northwest.
6692 | Leave location constraint empty.
6693 \ "us-east-1"
6694 / US East (Ohio) Region
6695 2 | Needs location constraint us-east-2.
6696 \ "us-east-2"
6697 / US West (Oregon) Region
6698 3 | Needs location constraint us-west-2.
6699 \ "us-west-2"
6700 / US West (Northern California) Region
6701 4 | Needs location constraint us-west-1.
6702 \ "us-west-1"
6703 / Canada (Central) Region
6704 5 | Needs location constraint ca-central-1.
6705 \ "ca-central-1"
6706 / EU (Ireland) Region
6707 6 | Needs location constraint EU or eu-west-1.
6708 \ "eu-west-1"
6709 / EU (London) Region
6710 7 | Needs location constraint eu-west-2.
6711 \ "eu-west-2"
6712 / EU (Frankfurt) Region
6713 8 | Needs location constraint eu-central-1.
6714 \ "eu-central-1"
6715 / Asia Pacific (Singapore) Region
6716 9 | Needs location constraint ap-southeast-1.
6717 \ "ap-southeast-1"
6718 / Asia Pacific (Sydney) Region
6719 10 | Needs location constraint ap-southeast-2.
6720 \ "ap-southeast-2"
6721 / Asia Pacific (Tokyo) Region
6722 11 | Needs location constraint ap-northeast-1.
6723 \ "ap-northeast-1"
6724 / Asia Pacific (Seoul)
6725 12 | Needs location constraint ap-northeast-2.
6726 \ "ap-northeast-2"
6727 / Asia Pacific (Mumbai)
6728 13 | Needs location constraint ap-south-1.
6729 \ "ap-south-1"
6730 / South America (Sao Paulo) Region
6731 14 | Needs location constraint sa-east-1.
6732 \ "sa-east-1"
6733 region> 1
6734 Endpoint for S3 API.
6735 Leave blank if using AWS to use the default endpoint for the region.
6736 endpoint>
6737 Location constraint - must be set to match the Region. Used when creating buckets only.
6738 Choose a number from below, or type in your own value
6739 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
6740 \ ""
6741 2 / US East (Ohio) Region.
6742 \ "us-east-2"
6743 3 / US West (Oregon) Region.
6744 \ "us-west-2"
6745 4 / US West (Northern California) Region.
6746 \ "us-west-1"
6747 5 / Canada (Central) Region.
6748 \ "ca-central-1"
6749 6 / EU (Ireland) Region.
6750 \ "eu-west-1"
6751 7 / EU (London) Region.
6752 \ "eu-west-2"
6753 8 / EU Region.
6754 \ "EU"
6755 9 / Asia Pacific (Singapore) Region.
6756 \ "ap-southeast-1"
6757 10 / Asia Pacific (Sydney) Region.
6758 \ "ap-southeast-2"
6759 11 / Asia Pacific (Tokyo) Region.
6760 \ "ap-northeast-1"
6761 12 / Asia Pacific (Seoul)
6762 \ "ap-northeast-2"
6763 13 / Asia Pacific (Mumbai)
6764 \ "ap-south-1"
6765 14 / South America (Sao Paulo) Region.
6766 \ "sa-east-1"
6767 location_constraint> 1
6768 Canned ACL used when creating buckets and/or storing objects in S3.
6769 For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
6770 Choose a number from below, or type in your own value
6771 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
6772 \ "private"
6773 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
6774 \ "public-read"
6775 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
6776 3 | Granting this on a bucket is generally not recommended.
6777 \ "public-read-write"
6778 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
6779 \ "authenticated-read"
6780 / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
6781 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
6782 \ "bucket-owner-read"
6783 / Both the object owner and the bucket owner get FULL_CONTROL over the object.
6784 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
6785 \ "bucket-owner-full-control"
6786 acl> 1
6787 The server-side encryption algorithm used when storing this object in S3.
6788 Choose a number from below, or type in your own value
6789 1 / None
6790 \ ""
6791 2 / AES256
6792 \ "AES256"
6793 server_side_encryption> 1
6794 The storage class to use when storing objects in S3.
6795 Choose a number from below, or type in your own value
6796 1 / Default
6797 \ ""
6798 2 / Standard storage class
6799 \ "STANDARD"
6800 3 / Reduced redundancy storage class
6801 \ "REDUCED_REDUNDANCY"
6802 4 / Standard Infrequent Access storage class
6803 \ "STANDARD_IA"
6804 5 / One Zone Infrequent Access storage class
6805 \ "ONEZONE_IA"
6806 6 / Glacier storage class
6807 \ "GLACIER"
6808 7 / Glacier Deep Archive storage class
6809 \ "DEEP_ARCHIVE"
6810 storage_class> 1
6811 Remote config
6812 --------------------
6813 [remote]
6814 type = s3
6815 provider = AWS
6816 env_auth = false
6817 access_key_id = XXX
6818 secret_access_key = YYY
6819 region = us-east-1
6820 endpoint =
6821 location_constraint =
6822 acl = private
6823 server_side_encryption =
6824 storage_class =
6825 --------------------
6826 y) Yes this is OK
6827 e) Edit this remote
6828 d) Delete this remote
6829 y/e/d>
6830
6831 –fast-list
6832 This remote supports --fast-list which allows you to use fewer transac‐
6833 tions in exchange for more memory. See the rclone docs (/docs/#fast-
6834 list) for more details.
6835
6836 –update and –use-server-modtime
6837 As noted below, the modified time is stored on metadata on the object.
6838 It is used by default for all operations that require checking the time
6839 a file was last updated. It allows rclone to treat the remote more
6840 like a true filesystem, but it is inefficient because it requires an
6841 extra API call to retrieve the metadata.
6842
6843 For many operations, the time the object was last uploaded to the re‐
6844 mote is sufficient to determine if it is “dirty”. By using --update
6845 along with --use-server-modtime, you can avoid the extra API call and
6846 simply upload files whose local modtime is newer than the time it was
6847 last uploaded.
6848
6849 Modified time
6850 The modified time is stored as metadata on the object as
6851 X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.
6852
6853 Multipart uploads
6854 rclone supports multipart uploads with S3 which means that it can up‐
6855 load files bigger than 5GB.
6856
6857 Note that files uploaded both with multipart upload and through crypt
6858 remotes do not have MD5 sums.
6859
6860 rclone switches from single part uploads to multipart uploads at the
6861 point specified by --s3-upload-cutoff. This can be a maximum of 5GB
6862 and a minimum of 0 (ie always upload multipart files).
6863
6864 The chunk sizes used in the multipart upload are specified by
6865 --s3-chunk-size and the number of chunks uploaded concurrently is spec‐
6866 ified by --s3-upload-concurrency.
6867
6868 Multipart uploads will use --transfers * --s3-upload-concurrency *
6869 --s3-chunk-size extra memory. Single part uploads to not use extra
6870 memory.
6871
6872 Single part transfers can be faster than multipart transfers or slower
6873 depending on your latency from S3 - the more latency, the more likely
6874 single part transfers will be faster.
6875
6876 Increasing --s3-upload-concurrency will increase throughput (8 would be
6877 a sensible value) and increasing --s3-chunk-size also increases
6878 throughput (16M would be sensible). Increasing either of these will
6879 use more memory. The default values are high enough to gain most of
6880 the possible performance without using too much memory.
6881
6882 Buckets and Regions
6883 With Amazon S3 you can list buckets (rclone lsd) using any region, but
6884 you can only access the content of a bucket from the region it was cre‐
6885 ated in. If you attempt to access a bucket from the wrong region, you
6886 will get an error, incorrect region, the bucket is not in 'XXX' region.
6887
6888 Authentication
6889 There are a number of ways to supply rclone with a set of AWS creden‐
6890 tials, with and without using the environment.
6891
6892 The different authentication methods are tried in this order:
6893
6894 · Directly in the rclone configuration file (env_auth = false in the
6895 config file):
6896
6897 · access_key_id and secret_access_key are required.
6898
6899 · session_token can be optionally set when using AWS STS.
6900
6901 · Runtime configuration (env_auth = true in the config file):
6902
6903 · Export the following environment variables before running rclone:
6904
6905 · Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
6906
6907 · Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
6908
6909 · Session Token: AWS_SESSION_TOKEN (optional)
6910
6911 · Or, use a named profile (https://docs.aws.amazon.com/cli/lat‐
6912 est/userguide/cli-multiple-profiles.html):
6913
6914 · Profile files are standard files used by AWS CLI tools
6915
6916 · By default it will use the profile in your home directory (eg
6917 ~/.aws/credentials on unix based systems) file and the “default”
6918 profile, to change set these environment variables:
6919
6920 · AWS_SHARED_CREDENTIALS_FILE to control which file.
6921
6922 · AWS_PROFILE to control which profile to use.
6923
6924 · Or, run rclone in an ECS task with an IAM role (AWS only).
6925
6926 · Or, run rclone on an EC2 instance with an IAM role (AWS only).
6927
6928 If none of these option actually end up providing rclone with AWS cre‐
6929 dentials then S3 interaction will be non-authenticated (see below).
6930
6931 S3 Permissions
6932 When using the sync subcommand of rclone the following minimum permis‐
6933 sions are required to be available on the bucket being written to:
6934
6935 · ListBucket
6936
6937 · DeleteObject
6938
6939 · GetObject
6940
6941 · PutObject
6942
6943 · PutObjectACL
6944
6945 Example policy:
6946
6947 {
6948 "Version": "2012-10-17",
6949 "Statement": [
6950 {
6951 "Effect": "Allow",
6952 "Principal": {
6953 "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
6954 },
6955 "Action": [
6956 "s3:ListBucket",
6957 "s3:DeleteObject",
6958 "s3:GetObject",
6959 "s3:PutObject",
6960 "s3:PutObjectAcl"
6961 ],
6962 "Resource": [
6963 "arn:aws:s3:::BUCKET_NAME/*",
6964 "arn:aws:s3:::BUCKET_NAME"
6965 ]
6966 }
6967 ]
6968 }
6969
6970 Notes on above:
6971
6972 1. This is a policy that can be used when creating bucket. It assumes
6973 that USER_NAME has been created.
6974
6975 2. The Resource entry must include both resource ARNs, as one implies
6976 the bucket and the other implies the bucket's objects.
6977
6978 For reference, here's an Ansible script
6979 (https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
6980 that will generate one or more buckets that will work with rclone sync.
6981
6982 Key Management System (KMS)
6983 If you are using server side encryption with KMS then you will find you
6984 can't transfer small objects. As a work-around you can use the --ig‐
6985 nore-checksum flag.
6986
6987 A proper fix is being worked on in issue #1824
6988 (https://github.com/ncw/rclone/issues/1824).
6989
6990 Glacier and Glacier Deep Archive
6991 You can upload objects using the glacier storage class or transition
6992 them to glacier using a lifecycle policy (http://docs.aws.ama‐
6993 zon.com/AmazonS3/latest/user-guide/create-lifecycle.html). The bucket
6994 can still be synced or copied into normally, but if rclone tries to ac‐
6995 cess data from the glacier storage class you will see an error like be‐
6996 low.
6997
6998 2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
6999
7000 In this case you need to restore (http://docs.aws.amazon.com/Ama‐
7001 zonS3/latest/user-guide/restore-archived-objects.html) the object(s) in
7002 question before using rclone.
7003
7004 Standard Options
7005 Here are the standard options specific to s3 (Amazon S3 Compliant Stor‐
7006 age Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS,
7007 Minio, etc)).
7008
7009 –s3-provider
7010 Choose your S3 provider.
7011
7012 · Config: provider
7013
7014 · Env Var: RCLONE_S3_PROVIDER
7015
7016 · Type: string
7017
7018 · Default: ""
7019
7020 · Examples:
7021
7022 · “AWS”
7023
7024 · Amazon Web Services (AWS) S3
7025
7026 · “Alibaba”
7027
7028 · Alibaba Cloud Object Storage System (OSS) formerly Aliyun
7029
7030 · “Ceph”
7031
7032 · Ceph Object Storage
7033
7034 · “DigitalOcean”
7035
7036 · Digital Ocean Spaces
7037
7038 · “Dreamhost”
7039
7040 · Dreamhost DreamObjects
7041
7042 · “IBMCOS”
7043
7044 · IBM COS S3
7045
7046 · “Minio”
7047
7048 · Minio Object Storage
7049
7050 · “Netease”
7051
7052 · Netease Object Storage (NOS)
7053
7054 · “Wasabi”
7055
7056 · Wasabi Object Storage
7057
7058 · “Other”
7059
7060 · Any other S3 compatible provider
7061
7062 –s3-env-auth
7063 Get AWS credentials from runtime (environment variables or EC2/ECS meta
7064 data if no env vars). Only applies if access_key_id and secret_ac‐
7065 cess_key is blank.
7066
7067 · Config: env_auth
7068
7069 · Env Var: RCLONE_S3_ENV_AUTH
7070
7071 · Type: bool
7072
7073 · Default: false
7074
7075 · Examples:
7076
7077 · “false”
7078
7079 · Enter AWS credentials in the next step
7080
7081 · “true”
7082
7083 · Get AWS credentials from the environment (env vars or IAM)
7084
7085 –s3-access-key-id
7086 AWS Access Key ID. Leave blank for anonymous access or runtime creden‐
7087 tials.
7088
7089 · Config: access_key_id
7090
7091 · Env Var: RCLONE_S3_ACCESS_KEY_ID
7092
7093 · Type: string
7094
7095 · Default: ""
7096
7097 –s3-secret-access-key
7098 AWS Secret Access Key (password) Leave blank for anonymous access or
7099 runtime credentials.
7100
7101 · Config: secret_access_key
7102
7103 · Env Var: RCLONE_S3_SECRET_ACCESS_KEY
7104
7105 · Type: string
7106
7107 · Default: ""
7108
7109 –s3-region
7110 Region to connect to.
7111
7112 · Config: region
7113
7114 · Env Var: RCLONE_S3_REGION
7115
7116 · Type: string
7117
7118 · Default: ""
7119
7120 · Examples:
7121
7122 · “us-east-1”
7123
7124 · The default endpoint - a good choice if you are unsure.
7125
7126 · US Region, Northern Virginia or Pacific Northwest.
7127
7128 · Leave location constraint empty.
7129
7130 · “us-east-2”
7131
7132 · US East (Ohio) Region
7133
7134 · Needs location constraint us-east-2.
7135
7136 · “us-west-2”
7137
7138 · US West (Oregon) Region
7139
7140 · Needs location constraint us-west-2.
7141
7142 · “us-west-1”
7143
7144 · US West (Northern California) Region
7145
7146 · Needs location constraint us-west-1.
7147
7148 · “ca-central-1”
7149
7150 · Canada (Central) Region
7151
7152 · Needs location constraint ca-central-1.
7153
7154 · “eu-west-1”
7155
7156 · EU (Ireland) Region
7157
7158 · Needs location constraint EU or eu-west-1.
7159
7160 · “eu-west-2”
7161
7162 · EU (London) Region
7163
7164 · Needs location constraint eu-west-2.
7165
7166 · “eu-north-1”
7167
7168 · EU (Stockholm) Region
7169
7170 · Needs location constraint eu-north-1.
7171
7172 · “eu-central-1”
7173
7174 · EU (Frankfurt) Region
7175
7176 · Needs location constraint eu-central-1.
7177
7178 · “ap-southeast-1”
7179
7180 · Asia Pacific (Singapore) Region
7181
7182 · Needs location constraint ap-southeast-1.
7183
7184 · “ap-southeast-2”
7185
7186 · Asia Pacific (Sydney) Region
7187
7188 · Needs location constraint ap-southeast-2.
7189
7190 · “ap-northeast-1”
7191
7192 · Asia Pacific (Tokyo) Region
7193
7194 · Needs location constraint ap-northeast-1.
7195
7196 · “ap-northeast-2”
7197
7198 · Asia Pacific (Seoul)
7199
7200 · Needs location constraint ap-northeast-2.
7201
7202 · “ap-south-1”
7203
7204 · Asia Pacific (Mumbai)
7205
7206 · Needs location constraint ap-south-1.
7207
7208 · “sa-east-1”
7209
7210 · South America (Sao Paulo) Region
7211
7212 · Needs location constraint sa-east-1.
7213
7214 –s3-region
7215 Region to connect to. Leave blank if you are using an S3 clone and you
7216 don't have a region.
7217
7218 · Config: region
7219
7220 · Env Var: RCLONE_S3_REGION
7221
7222 · Type: string
7223
7224 · Default: ""
7225
7226 · Examples:
7227
7228 · ""
7229
7230 · Use this if unsure. Will use v4 signatures and an empty region.
7231
7232 · “other-v2-signature”
7233
7234 · Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
7235
7236 –s3-endpoint
7237 Endpoint for S3 API. Leave blank if using AWS to use the default end‐
7238 point for the region.
7239
7240 · Config: endpoint
7241
7242 · Env Var: RCLONE_S3_ENDPOINT
7243
7244 · Type: string
7245
7246 · Default: ""
7247
7248 –s3-endpoint
7249 Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.
7250
7251 · Config: endpoint
7252
7253 · Env Var: RCLONE_S3_ENDPOINT
7254
7255 · Type: string
7256
7257 · Default: ""
7258
7259 · Examples:
7260
7261 · “s3-api.us-geo.objectstorage.softlayer.net”
7262
7263 · US Cross Region Endpoint
7264
7265 · “s3-api.dal.us-geo.objectstorage.softlayer.net”
7266
7267 · US Cross Region Dallas Endpoint
7268
7269 · “s3-api.wdc-us-geo.objectstorage.softlayer.net”
7270
7271 · US Cross Region Washington DC Endpoint
7272
7273 · “s3-api.sjc-us-geo.objectstorage.softlayer.net”
7274
7275 · US Cross Region San Jose Endpoint
7276
7277 · “s3-api.us-geo.objectstorage.service.networklayer.com”
7278
7279 · US Cross Region Private Endpoint
7280
7281 · “s3-api.dal-us-geo.objectstorage.service.networklayer.com”
7282
7283 · US Cross Region Dallas Private Endpoint
7284
7285 · “s3-api.wdc-us-geo.objectstorage.service.networklayer.com”
7286
7287 · US Cross Region Washington DC Private Endpoint
7288
7289 · “s3-api.sjc-us-geo.objectstorage.service.networklayer.com”
7290
7291 · US Cross Region San Jose Private Endpoint
7292
7293 · “s3.us-east.objectstorage.softlayer.net”
7294
7295 · US Region East Endpoint
7296
7297 · “s3.us-east.objectstorage.service.networklayer.com”
7298
7299 · US Region East Private Endpoint
7300
7301 · “s3.us-south.objectstorage.softlayer.net”
7302
7303 · US Region South Endpoint
7304
7305 · “s3.us-south.objectstorage.service.networklayer.com”
7306
7307 · US Region South Private Endpoint
7308
7309 · “s3.eu-geo.objectstorage.softlayer.net”
7310
7311 · EU Cross Region Endpoint
7312
7313 · “s3.fra-eu-geo.objectstorage.softlayer.net”
7314
7315 · EU Cross Region Frankfurt Endpoint
7316
7317 · “s3.mil-eu-geo.objectstorage.softlayer.net”
7318
7319 · EU Cross Region Milan Endpoint
7320
7321 · “s3.ams-eu-geo.objectstorage.softlayer.net”
7322
7323 · EU Cross Region Amsterdam Endpoint
7324
7325 · “s3.eu-geo.objectstorage.service.networklayer.com”
7326
7327 · EU Cross Region Private Endpoint
7328
7329 · “s3.fra-eu-geo.objectstorage.service.networklayer.com”
7330
7331 · EU Cross Region Frankfurt Private Endpoint
7332
7333 · “s3.mil-eu-geo.objectstorage.service.networklayer.com”
7334
7335 · EU Cross Region Milan Private Endpoint
7336
7337 · “s3.ams-eu-geo.objectstorage.service.networklayer.com”
7338
7339 · EU Cross Region Amsterdam Private Endpoint
7340
7341 · “s3.eu-gb.objectstorage.softlayer.net”
7342
7343 · Great Britain Endpoint
7344
7345 · “s3.eu-gb.objectstorage.service.networklayer.com”
7346
7347 · Great Britain Private Endpoint
7348
7349 · “s3.ap-geo.objectstorage.softlayer.net”
7350
7351 · APAC Cross Regional Endpoint
7352
7353 · “s3.tok-ap-geo.objectstorage.softlayer.net”
7354
7355 · APAC Cross Regional Tokyo Endpoint
7356
7357 · “s3.hkg-ap-geo.objectstorage.softlayer.net”
7358
7359 · APAC Cross Regional HongKong Endpoint
7360
7361 · “s3.seo-ap-geo.objectstorage.softlayer.net”
7362
7363 · APAC Cross Regional Seoul Endpoint
7364
7365 · “s3.ap-geo.objectstorage.service.networklayer.com”
7366
7367 · APAC Cross Regional Private Endpoint
7368
7369 · “s3.tok-ap-geo.objectstorage.service.networklayer.com”
7370
7371 · APAC Cross Regional Tokyo Private Endpoint
7372
7373 · “s3.hkg-ap-geo.objectstorage.service.networklayer.com”
7374
7375 · APAC Cross Regional HongKong Private Endpoint
7376
7377 · “s3.seo-ap-geo.objectstorage.service.networklayer.com”
7378
7379 · APAC Cross Regional Seoul Private Endpoint
7380
7381 · “s3.mel01.objectstorage.softlayer.net”
7382
7383 · Melbourne Single Site Endpoint
7384
7385 · “s3.mel01.objectstorage.service.networklayer.com”
7386
7387 · Melbourne Single Site Private Endpoint
7388
7389 · “s3.tor01.objectstorage.softlayer.net”
7390
7391 · Toronto Single Site Endpoint
7392
7393 · “s3.tor01.objectstorage.service.networklayer.com”
7394
7395 · Toronto Single Site Private Endpoint
7396
7397 –s3-endpoint
7398 Endpoint for OSS API.
7399
7400 · Config: endpoint
7401
7402 · Env Var: RCLONE_S3_ENDPOINT
7403
7404 · Type: string
7405
7406 · Default: ""
7407
7408 · Examples:
7409
7410 · “oss-cn-hangzhou.aliyuncs.com”
7411
7412 · East China 1 (Hangzhou)
7413
7414 · “oss-cn-shanghai.aliyuncs.com”
7415
7416 · East China 2 (Shanghai)
7417
7418 · “oss-cn-qingdao.aliyuncs.com”
7419
7420 · North China 1 (Qingdao)
7421
7422 · “oss-cn-beijing.aliyuncs.com”
7423
7424 · North China 2 (Beijing)
7425
7426 · “oss-cn-zhangjiakou.aliyuncs.com”
7427
7428 · North China 3 (Zhangjiakou)
7429
7430 · “oss-cn-huhehaote.aliyuncs.com”
7431
7432 · North China 5 (Huhehaote)
7433
7434 · “oss-cn-shenzhen.aliyuncs.com”
7435
7436 · South China 1 (Shenzhen)
7437
7438 · “oss-cn-hongkong.aliyuncs.com”
7439
7440 · Hong Kong (Hong Kong)
7441
7442 · “oss-us-west-1.aliyuncs.com”
7443
7444 · US West 1 (Silicon Valley)
7445
7446 · “oss-us-east-1.aliyuncs.com”
7447
7448 · US East 1 (Virginia)
7449
7450 · “oss-ap-southeast-1.aliyuncs.com”
7451
7452 · Southeast Asia Southeast 1 (Singapore)
7453
7454 · “oss-ap-southeast-2.aliyuncs.com”
7455
7456 · Asia Pacific Southeast 2 (Sydney)
7457
7458 · “oss-ap-southeast-3.aliyuncs.com”
7459
7460 · Southeast Asia Southeast 3 (Kuala Lumpur)
7461
7462 · “oss-ap-southeast-5.aliyuncs.com”
7463
7464 · Asia Pacific Southeast 5 (Jakarta)
7465
7466 · “oss-ap-northeast-1.aliyuncs.com”
7467
7468 · Asia Pacific Northeast 1 (Japan)
7469
7470 · “oss-ap-south-1.aliyuncs.com”
7471
7472 · Asia Pacific South 1 (Mumbai)
7473
7474 · “oss-eu-central-1.aliyuncs.com”
7475
7476 · Central Europe 1 (Frankfurt)
7477
7478 · “oss-eu-west-1.aliyuncs.com”
7479
7480 · West Europe (London)
7481
7482 · “oss-me-east-1.aliyuncs.com”
7483
7484 · Middle East 1 (Dubai)
7485
7486 –s3-endpoint
7487 Endpoint for S3 API. Required when using an S3 clone.
7488
7489 · Config: endpoint
7490
7491 · Env Var: RCLONE_S3_ENDPOINT
7492
7493 · Type: string
7494
7495 · Default: ""
7496
7497 · Examples:
7498
7499 · “objects-us-east-1.dream.io”
7500
7501 · Dream Objects endpoint
7502
7503 · “nyc3.digitaloceanspaces.com”
7504
7505 · Digital Ocean Spaces New York 3
7506
7507 · “ams3.digitaloceanspaces.com”
7508
7509 · Digital Ocean Spaces Amsterdam 3
7510
7511 · “sgp1.digitaloceanspaces.com”
7512
7513 · Digital Ocean Spaces Singapore 1
7514
7515 · “s3.wasabisys.com”
7516
7517 · Wasabi US East endpoint
7518
7519 · “s3.us-west-1.wasabisys.com”
7520
7521 · Wasabi US West endpoint
7522
7523 –s3-location-constraint
7524 Location constraint - must be set to match the Region. Used when cre‐
7525 ating buckets only.
7526
7527 · Config: location_constraint
7528
7529 · Env Var: RCLONE_S3_LOCATION_CONSTRAINT
7530
7531 · Type: string
7532
7533 · Default: ""
7534
7535 · Examples:
7536
7537 · ""
7538
7539 · Empty for US Region, Northern Virginia or Pacific Northwest.
7540
7541 · “us-east-2”
7542
7543 · US East (Ohio) Region.
7544
7545 · “us-west-2”
7546
7547 · US West (Oregon) Region.
7548
7549 · “us-west-1”
7550
7551 · US West (Northern California) Region.
7552
7553 · “ca-central-1”
7554
7555 · Canada (Central) Region.
7556
7557 · “eu-west-1”
7558
7559 · EU (Ireland) Region.
7560
7561 · “eu-west-2”
7562
7563 · EU (London) Region.
7564
7565 · “eu-north-1”
7566
7567 · EU (Stockholm) Region.
7568
7569 · “EU”
7570
7571 · EU Region.
7572
7573 · “ap-southeast-1”
7574
7575 · Asia Pacific (Singapore) Region.
7576
7577 · “ap-southeast-2”
7578
7579 · Asia Pacific (Sydney) Region.
7580
7581 · “ap-northeast-1”
7582
7583 · Asia Pacific (Tokyo) Region.
7584
7585 · “ap-northeast-2”
7586
7587 · Asia Pacific (Seoul)
7588
7589 · “ap-south-1”
7590
7591 · Asia Pacific (Mumbai)
7592
7593 · “sa-east-1”
7594
7595 · South America (Sao Paulo) Region.
7596
7597 –s3-location-constraint
7598 Location constraint - must match endpoint when using IBM Cloud Public.
7599 For on-prem COS, do not make a selection from this list, hit enter
7600
7601 · Config: location_constraint
7602
7603 · Env Var: RCLONE_S3_LOCATION_CONSTRAINT
7604
7605 · Type: string
7606
7607 · Default: ""
7608
7609 · Examples:
7610
7611 · “us-standard”
7612
7613 · US Cross Region Standard
7614
7615 · “us-vault”
7616
7617 · US Cross Region Vault
7618
7619 · “us-cold”
7620
7621 · US Cross Region Cold
7622
7623 · “us-flex”
7624
7625 · US Cross Region Flex
7626
7627 · “us-east-standard”
7628
7629 · US East Region Standard
7630
7631 · “us-east-vault”
7632
7633 · US East Region Vault
7634
7635 · “us-east-cold”
7636
7637 · US East Region Cold
7638
7639 · “us-east-flex”
7640
7641 · US East Region Flex
7642
7643 · “us-south-standard”
7644
7645 · US South Region Standard
7646
7647 · “us-south-vault”
7648
7649 · US South Region Vault
7650
7651 · “us-south-cold”
7652
7653 · US South Region Cold
7654
7655 · “us-south-flex”
7656
7657 · US South Region Flex
7658
7659 · “eu-standard”
7660
7661 · EU Cross Region Standard
7662
7663 · “eu-vault”
7664
7665 · EU Cross Region Vault
7666
7667 · “eu-cold”
7668
7669 · EU Cross Region Cold
7670
7671 · “eu-flex”
7672
7673 · EU Cross Region Flex
7674
7675 · “eu-gb-standard”
7676
7677 · Great Britain Standard
7678
7679 · “eu-gb-vault”
7680
7681 · Great Britain Vault
7682
7683 · “eu-gb-cold”
7684
7685 · Great Britain Cold
7686
7687 · “eu-gb-flex”
7688
7689 · Great Britain Flex
7690
7691 · “ap-standard”
7692
7693 · APAC Standard
7694
7695 · “ap-vault”
7696
7697 · APAC Vault
7698
7699 · “ap-cold”
7700
7701 · APAC Cold
7702
7703 · “ap-flex”
7704
7705 · APAC Flex
7706
7707 · “mel01-standard”
7708
7709 · Melbourne Standard
7710
7711 · “mel01-vault”
7712
7713 · Melbourne Vault
7714
7715 · “mel01-cold”
7716
7717 · Melbourne Cold
7718
7719 · “mel01-flex”
7720
7721 · Melbourne Flex
7722
7723 · “tor01-standard”
7724
7725 · Toronto Standard
7726
7727 · “tor01-vault”
7728
7729 · Toronto Vault
7730
7731 · “tor01-cold”
7732
7733 · Toronto Cold
7734
7735 · “tor01-flex”
7736
7737 · Toronto Flex
7738
7739 –s3-location-constraint
7740 Location constraint - must be set to match the Region. Leave blank if
7741 not sure. Used when creating buckets only.
7742
7743 · Config: location_constraint
7744
7745 · Env Var: RCLONE_S3_LOCATION_CONSTRAINT
7746
7747 · Type: string
7748
7749 · Default: ""
7750
7751 –s3-acl
7752 Canned ACL used when creating buckets and storing or copying objects.
7753
7754 This ACL is used for creating objects and if bucket_acl isn't set, for
7755 creating buckets too.
7756
7757 For more info visit https://docs.aws.amazon.com/AmazonS3/lat‐
7758 est/dev/acl-overview.html#canned-acl
7759
7760 Note that this ACL is applied when server side copying objects as S3
7761 doesn't copy the ACL from the source but rather writes a fresh one.
7762
7763 · Config: acl
7764
7765 · Env Var: RCLONE_S3_ACL
7766
7767 · Type: string
7768
7769 · Default: ""
7770
7771 · Examples:
7772
7773 · “private”
7774
7775 · Owner gets FULL_CONTROL. No one else has access rights (de‐
7776 fault).
7777
7778 · “public-read”
7779
7780 · Owner gets FULL_CONTROL. The AllUsers group gets READ access.
7781
7782 · “public-read-write”
7783
7784 · Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE
7785 access.
7786
7787 · Granting this on a bucket is generally not recommended.
7788
7789 · “authenticated-read”
7790
7791 · Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ
7792 access.
7793
7794 · “bucket-owner-read”
7795
7796 · Object owner gets FULL_CONTROL. Bucket owner gets READ access.
7797
7798 · If you specify this canned ACL when creating a bucket, Amazon S3
7799 ignores it.
7800
7801 · “bucket-owner-full-control”
7802
7803 · Both the object owner and the bucket owner get FULL_CONTROL over
7804 the object.
7805
7806 · If you specify this canned ACL when creating a bucket, Amazon S3
7807 ignores it.
7808
7809 · “private”
7810
7811 · Owner gets FULL_CONTROL. No one else has access rights (de‐
7812 fault). This acl is available on IBM Cloud (Infra), IBM Cloud
7813 (Storage), On-Premise COS
7814
7815 · “public-read”
7816
7817 · Owner gets FULL_CONTROL. The AllUsers group gets READ access.
7818 This acl is available on IBM Cloud (Infra), IBM Cloud (Storage),
7819 On-Premise IBM COS
7820
7821 · “public-read-write”
7822
7823 · Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE
7824 access. This acl is available on IBM Cloud (Infra), On-Premise
7825 IBM COS
7826
7827 · “authenticated-read”
7828
7829 · Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ
7830 access. Not supported on Buckets. This acl is available on IBM
7831 Cloud (Infra) and On-Premise IBM COS
7832
7833 –s3-server-side-encryption
7834 The server-side encryption algorithm used when storing this object in
7835 S3.
7836
7837 · Config: server_side_encryption
7838
7839 · Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
7840
7841 · Type: string
7842
7843 · Default: ""
7844
7845 · Examples:
7846
7847 · ""
7848
7849 · None
7850
7851 · “AES256”
7852
7853 · AES256
7854
7855 · “aws:kms”
7856
7857 · aws:kms
7858
7859 –s3-sse-kms-key-id
7860 If using KMS ID you must provide the ARN of Key.
7861
7862 · Config: sse_kms_key_id
7863
7864 · Env Var: RCLONE_S3_SSE_KMS_KEY_ID
7865
7866 · Type: string
7867
7868 · Default: ""
7869
7870 · Examples:
7871
7872 · ""
7873
7874 · None
7875
7876 · "arn:aws:kms:us-east-1:*"
7877
7878 · arn:aws:kms:*
7879
7880 –s3-storage-class
7881 The storage class to use when storing new objects in S3.
7882
7883 · Config: storage_class
7884
7885 · Env Var: RCLONE_S3_STORAGE_CLASS
7886
7887 · Type: string
7888
7889 · Default: ""
7890
7891 · Examples:
7892
7893 · ""
7894
7895 · Default
7896
7897 · “STANDARD”
7898
7899 · Standard storage class
7900
7901 · “REDUCED_REDUNDANCY”
7902
7903 · Reduced redundancy storage class
7904
7905 · “STANDARD_IA”
7906
7907 · Standard Infrequent Access storage class
7908
7909 · “ONEZONE_IA”
7910
7911 · One Zone Infrequent Access storage class
7912
7913 · “GLACIER”
7914
7915 · Glacier storage class
7916
7917 · “DEEP_ARCHIVE”
7918
7919 · Glacier Deep Archive storage class
7920
7921 –s3-storage-class
7922 The storage class to use when storing new objects in OSS.
7923
7924 · Config: storage_class
7925
7926 · Env Var: RCLONE_S3_STORAGE_CLASS
7927
7928 · Type: string
7929
7930 · Default: ""
7931
7932 · Examples:
7933
7934 · ""
7935
7936 · Default
7937
7938 · “STANDARD”
7939
7940 · Standard storage class
7941
7942 · “GLACIER”
7943
7944 · Archive storage mode.
7945
7946 · “STANDARD_IA”
7947
7948 · Infrequent access storage mode.
7949
7950 Advanced Options
7951 Here are the advanced options specific to s3 (Amazon S3 Compliant Stor‐
7952 age Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS,
7953 Minio, etc)).
7954
7955 –s3-bucket-acl
7956 Canned ACL used when creating buckets.
7957
7958 For more info visit https://docs.aws.amazon.com/AmazonS3/lat‐
7959 est/dev/acl-overview.html#canned-acl
7960
7961 Note that this ACL is applied when only when creating buckets. If it
7962 isn't set then “acl” is used instead.
7963
7964 · Config: bucket_acl
7965
7966 · Env Var: RCLONE_S3_BUCKET_ACL
7967
7968 · Type: string
7969
7970 · Default: ""
7971
7972 · Examples:
7973
7974 · “private”
7975
7976 · Owner gets FULL_CONTROL. No one else has access rights (de‐
7977 fault).
7978
7979 · “public-read”
7980
7981 · Owner gets FULL_CONTROL. The AllUsers group gets READ access.
7982
7983 · “public-read-write”
7984
7985 · Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE
7986 access.
7987
7988 · Granting this on a bucket is generally not recommended.
7989
7990 · “authenticated-read”
7991
7992 · Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ
7993 access.
7994
7995 –s3-upload-cutoff
7996 Cutoff for switching to chunked upload
7997
7998 Any files larger than this will be uploaded in chunks of chunk_size.
7999 The minimum is 0 and the maximum is 5GB.
8000
8001 · Config: upload_cutoff
8002
8003 · Env Var: RCLONE_S3_UPLOAD_CUTOFF
8004
8005 · Type: SizeSuffix
8006
8007 · Default: 200M
8008
8009 –s3-chunk-size
8010 Chunk size to use for uploading.
8011
8012 When uploading files larger than upload_cutoff they will be uploaded as
8013 multipart uploads using this chunk size.
8014
8015 Note that “–s3-upload-concurrency” chunks of this size are buffered in
8016 memory per transfer.
8017
8018 If you are transferring large files over high speed links and you have
8019 enough memory, then increasing this will speed up the transfers.
8020
8021 · Config: chunk_size
8022
8023 · Env Var: RCLONE_S3_CHUNK_SIZE
8024
8025 · Type: SizeSuffix
8026
8027 · Default: 5M
8028
8029 –s3-disable-checksum
8030 Don't store MD5 checksum with object metadata
8031
8032 · Config: disable_checksum
8033
8034 · Env Var: RCLONE_S3_DISABLE_CHECKSUM
8035
8036 · Type: bool
8037
8038 · Default: false
8039
8040 –s3-session-token
8041 An AWS session token
8042
8043 · Config: session_token
8044
8045 · Env Var: RCLONE_S3_SESSION_TOKEN
8046
8047 · Type: string
8048
8049 · Default: ""
8050
8051 –s3-upload-concurrency
8052 Concurrency for multipart uploads.
8053
8054 This is the number of chunks of the same file that are uploaded concur‐
8055 rently.
8056
8057 If you are uploading small numbers of large file over high speed link
8058 and these uploads do not fully utilize your bandwidth, then increasing
8059 this may help to speed up the transfers.
8060
8061 · Config: upload_concurrency
8062
8063 · Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
8064
8065 · Type: int
8066
8067 · Default: 4
8068
8069 –s3-force-path-style
8070 If true use path style access if false use virtual hosted style.
8071
8072 If this is true (the default) then rclone will use path style access,
8073 if false then rclone will use virtual path style. See the AWS S3 docs
8074 (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#ac‐
8075 cess-bucket-intro) for more info.
8076
8077 Some providers (eg Aliyun OSS or Netease COS) require this set to
8078 false.
8079
8080 · Config: force_path_style
8081
8082 · Env Var: RCLONE_S3_FORCE_PATH_STYLE
8083
8084 · Type: bool
8085
8086 · Default: true
8087
8088 –s3-v2-auth
8089 If true use v2 authentication.
8090
8091 If this is false (the default) then rclone will use v4 authentication.
8092 If it is set then rclone will use v2 authentication.
8093
8094 Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
8095
8096 · Config: v2_auth
8097
8098 · Env Var: RCLONE_S3_V2_AUTH
8099
8100 · Type: bool
8101
8102 · Default: false
8103
8104 Anonymous access to public buckets
8105 If you want to use rclone to access a public bucket, configure with a
8106 blank access_key_id and secret_access_key. Your config should end up
8107 looking like this:
8108
8109 [anons3]
8110 type = s3
8111 provider = AWS
8112 env_auth = false
8113 access_key_id =
8114 secret_access_key =
8115 region = us-east-1
8116 endpoint =
8117 location_constraint =
8118 acl = private
8119 server_side_encryption =
8120 storage_class =
8121
8122 Then use it as normal with the name of the public bucket, eg
8123
8124 rclone lsd anons3:1000genomes
8125
8126 You will be able to list and copy data but not upload it.
8127
8128 Ceph
8129 Ceph (https://ceph.com/) is an open source unified, distributed storage
8130 system designed for excellent performance, reliability and scalability.
8131 It has an S3 compatible object storage interface.
8132
8133 To use rclone with Ceph, configure as above but leave the region blank
8134 and set the endpoint. You should end up with something like this in
8135 your config:
8136
8137 [ceph]
8138 type = s3
8139 provider = Ceph
8140 env_auth = false
8141 access_key_id = XXX
8142 secret_access_key = YYY
8143 region =
8144 endpoint = https://ceph.endpoint.example.com
8145 location_constraint =
8146 acl =
8147 server_side_encryption =
8148 storage_class =
8149
8150 If you are using an older version of CEPH, eg 10.2.x Jewel, then you
8151 may need to supply the parameter --s3-upload-cutoff 0 or put this in
8152 the config file as upload_cutoff 0 to work around a bug which causes
8153 uploading of small files to fail.
8154
8155 Note also that Ceph sometimes puts / in the passwords it gives users.
8156 If you read the secret access key using the command line tools you will
8157 get a JSON blob with the / escaped as \/. Make sure you only write /
8158 in the secret access key.
8159
8160 Eg the dump from Ceph looks something like this (irrelevant keys re‐
8161 moved).
8162
8163 {
8164 "user_id": "xxx",
8165 "display_name": "xxxx",
8166 "keys": [
8167 {
8168 "user": "xxx",
8169 "access_key": "xxxxxx",
8170 "secret_key": "xxxxxx\/xxxx"
8171 }
8172 ],
8173 }
8174
8175 Because this is a json dump, it is encoding the / as \/, so if you use
8176 the secret key as xxxxxx/xxxx it will work fine.
8177
8178 Dreamhost
8179 Dreamhost DreamObjects (https://www.dreamhost.com/cloud/storage/) is an
8180 object storage system based on CEPH.
8181
8182 To use rclone with Dreamhost, configure as above but leave the region
8183 blank and set the endpoint. You should end up with something like this
8184 in your config:
8185
8186 [dreamobjects]
8187 type = s3
8188 provider = DreamHost
8189 env_auth = false
8190 access_key_id = your_access_key
8191 secret_access_key = your_secret_key
8192 region =
8193 endpoint = objects-us-west-1.dream.io
8194 location_constraint =
8195 acl = private
8196 server_side_encryption =
8197 storage_class =
8198
8199 DigitalOcean Spaces
8200 Spaces (https://www.digitalocean.com/products/object-storage/) is an
8201 S3-interoperable (https://developers.digitalocean.com/documenta‐
8202 tion/spaces/) object storage service from cloud provider DigitalOcean.
8203
8204 To connect to DigitalOcean Spaces you will need an access key and se‐
8205 cret key. These can be retrieved on the “Applications & API
8206 (https://cloud.digitalocean.com/settings/api/tokens)” page of the Digi‐
8207 talOcean control panel. They will be needed when promted by
8208 rclone config for your access_key_id and secret_access_key.
8209
8210 When prompted for a region or location_constraint, press enter to use
8211 the default value. The region must be included in the endpoint setting
8212 (e.g. nyc3.digitaloceanspaces.com). The default values can be used
8213 for other settings.
8214
8215 Going through the whole process of creating a new remote by running
8216 rclone config, each prompt should be answered as shown below:
8217
8218 Storage> s3
8219 env_auth> 1
8220 access_key_id> YOUR_ACCESS_KEY
8221 secret_access_key> YOUR_SECRET_KEY
8222 region>
8223 endpoint> nyc3.digitaloceanspaces.com
8224 location_constraint>
8225 acl>
8226 storage_class>
8227
8228 The resulting configuration file should look like:
8229
8230 [spaces]
8231 type = s3
8232 provider = DigitalOcean
8233 env_auth = false
8234 access_key_id = YOUR_ACCESS_KEY
8235 secret_access_key = YOUR_SECRET_KEY
8236 region =
8237 endpoint = nyc3.digitaloceanspaces.com
8238 location_constraint =
8239 acl =
8240 server_side_encryption =
8241 storage_class =
8242
8243 Once configured, you can create a new Space and begin copying files.
8244 For example:
8245
8246 rclone mkdir spaces:my-new-space
8247 rclone copy /path/to/files spaces:my-new-space
8248
8249 IBM COS (S3)
8250 Information stored with IBM Cloud Object Storage is encrypted and dis‐
8251 persed across multiple geographic locations, and accessed through an
8252 implementation of the S3 API. This service makes use of the distrib‐
8253 uted storage technologies provided by IBM's Cloud Object Storage System
8254 (formerly Cleversafe). For more information visit:
8255 (http://www.ibm.com/cloud/object-storage)
8256
8257 To configure access to IBM COS S3, follow the steps below:
8258
8259 1. Run rclone config and select n for a new remote.
8260
8261 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
8262 No remotes found - make a new one
8263 n) New remote
8264 s) Set configuration password
8265 q) Quit config
8266 n/s/q> n
8267
8268 2. Enter the name for the configuration
8269
8270 name> <YOUR NAME>
8271
8272 3. Select “s3” storage.
8273
8274 Choose a number from below, or type in your own value
8275 1 / Alias for a existing remote
8276 \ "alias"
8277 2 / Amazon Drive
8278 \ "amazon cloud drive"
8279 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
8280 \ "s3"
8281 4 / Backblaze B2
8282 \ "b2"
8283 [snip]
8284 23 / http Connection
8285 \ "http"
8286 Storage> 3
8287
8288 4. Select IBM COS as the S3 Storage Provider.
8289
8290 Choose the S3 provider.
8291 Choose a number from below, or type in your own value
8292 1 / Choose this option to configure Storage to AWS S3
8293 \ "AWS"
8294 2 / Choose this option to configure Storage to Ceph Systems
8295 \ "Ceph"
8296 3 / Choose this option to configure Storage to Dreamhost
8297 \ "Dreamhost"
8298 4 / Choose this option to the configure Storage to IBM COS S3
8299 \ "IBMCOS"
8300 5 / Choose this option to the configure Storage to Minio
8301 \ "Minio"
8302 Provider>4
8303
8304 5. Enter the Access Key and Secret.
8305
8306 AWS Access Key ID - leave blank for anonymous access or runtime credentials.
8307 access_key_id> <>
8308 AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
8309 secret_access_key> <>
8310
8311 6. Specify the endpoint for IBM COS. For Public IBM COS, choose from
8312 the option below. For On Premise IBM COS, enter an enpoint address.
8313
8314 Endpoint for IBM COS S3 API.
8315 Specify if using an IBM COS On Premise.
8316 Choose a number from below, or type in your own value
8317 1 / US Cross Region Endpoint
8318 \ "s3-api.us-geo.objectstorage.softlayer.net"
8319 2 / US Cross Region Dallas Endpoint
8320 \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
8321 3 / US Cross Region Washington DC Endpoint
8322 \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
8323 4 / US Cross Region San Jose Endpoint
8324 \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
8325 5 / US Cross Region Private Endpoint
8326 \ "s3-api.us-geo.objectstorage.service.networklayer.com"
8327 6 / US Cross Region Dallas Private Endpoint
8328 \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
8329 7 / US Cross Region Washington DC Private Endpoint
8330 \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
8331 8 / US Cross Region San Jose Private Endpoint
8332 \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
8333 9 / US Region East Endpoint
8334 \ "s3.us-east.objectstorage.softlayer.net"
8335 10 / US Region East Private Endpoint
8336 \ "s3.us-east.objectstorage.service.networklayer.com"
8337 11 / US Region South Endpoint
8338 [snip]
8339 34 / Toronto Single Site Private Endpoint
8340 \ "s3.tor01.objectstorage.service.networklayer.com"
8341 endpoint>1
8342
8343 7. Specify a IBM COS Location Constraint. The location constraint must
8344 match endpoint when using IBM Cloud Public. For on-prem COS, do not
8345 make a selection from this list, hit enter
8346
8347 1 / US Cross Region Standard
8348 \ "us-standard"
8349 2 / US Cross Region Vault
8350 \ "us-vault"
8351 3 / US Cross Region Cold
8352 \ "us-cold"
8353 4 / US Cross Region Flex
8354 \ "us-flex"
8355 5 / US East Region Standard
8356 \ "us-east-standard"
8357 6 / US East Region Vault
8358 \ "us-east-vault"
8359 7 / US East Region Cold
8360 \ "us-east-cold"
8361 8 / US East Region Flex
8362 \ "us-east-flex"
8363 9 / US South Region Standard
8364 \ "us-south-standard"
8365 10 / US South Region Vault
8366 \ "us-south-vault"
8367 [snip]
8368 32 / Toronto Flex
8369 \ "tor01-flex"
8370 location_constraint>1
8371
8372 9. Specify a canned ACL. IBM Cloud (Strorage) supports “public-read”
8373 and “private”. IBM Cloud(Infra) supports all the canned ACLs.
8374 On-Premise COS supports all the canned ACLs.
8375
8376 Canned ACL used when creating buckets and/or storing objects in S3.
8377 For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
8378 Choose a number from below, or type in your own value
8379 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
8380 \ "private"
8381 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
8382 \ "public-read"
8383 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
8384 \ "public-read-write"
8385 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
8386 \ "authenticated-read"
8387 acl> 1
8388
8389 12. Review the displayed configuration and accept to save the “remote”
8390 then quit. The config file should look like this
8391
8392 [xxx]
8393 type = s3
8394 Provider = IBMCOS
8395 access_key_id = xxx
8396 secret_access_key = yyy
8397 endpoint = s3-api.us-geo.objectstorage.softlayer.net
8398 location_constraint = us-standard
8399 acl = private
8400
8401 13. Execute rclone commands
8402
8403 1) Create a bucket.
8404 rclone mkdir IBM-COS-XREGION:newbucket
8405 2) List available buckets.
8406 rclone lsd IBM-COS-XREGION:
8407 -1 2017-11-08 21:16:22 -1 test
8408 -1 2018-02-14 20:16:39 -1 newbucket
8409 3) List contents of a bucket.
8410 rclone ls IBM-COS-XREGION:newbucket
8411 18685952 test.exe
8412 4) Copy a file from local to remote.
8413 rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
8414 5) Copy a file from remote to local.
8415 rclone copy IBM-COS-XREGION:newbucket/file.txt .
8416 6) Delete a file on remote.
8417 rclone delete IBM-COS-XREGION:newbucket/file.txt
8418
8419 Minio
8420 Minio (https://minio.io/) is an object storage server built for cloud
8421 application developers and devops.
8422
8423 It is very easy to install and provides an S3 compatible server which
8424 can be used by rclone.
8425
8426 To use it, install Minio following the instructions here
8427 (https://docs.minio.io/docs/minio-quickstart-guide).
8428
8429 When it configures itself Minio will print something like this
8430
8431 Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
8432 AccessKey: USWUXHGYZQYFYFFIT3RE
8433 SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
8434 Region: us-east-1
8435 SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
8436
8437 Browser Access:
8438 http://192.168.1.106:9000 http://172.23.0.1:9000
8439
8440 Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
8441 $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
8442
8443 Object API (Amazon S3 compatible):
8444 Go: https://docs.minio.io/docs/golang-client-quickstart-guide
8445 Java: https://docs.minio.io/docs/java-client-quickstart-guide
8446 Python: https://docs.minio.io/docs/python-client-quickstart-guide
8447 JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
8448 .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
8449
8450 Drive Capacity: 26 GiB Free, 165 GiB Total
8451
8452 These details need to go into rclone config like this. Note that it is
8453 important to put the region in as stated above.
8454
8455 env_auth> 1
8456 access_key_id> USWUXHGYZQYFYFFIT3RE
8457 secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
8458 region> us-east-1
8459 endpoint> http://192.168.1.106:9000
8460 location_constraint>
8461 server_side_encryption>
8462
8463 Which makes the config file look like this
8464
8465 [minio]
8466 type = s3
8467 provider = Minio
8468 env_auth = false
8469 access_key_id = USWUXHGYZQYFYFFIT3RE
8470 secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
8471 region = us-east-1
8472 endpoint = http://192.168.1.106:9000
8473 location_constraint =
8474 server_side_encryption =
8475
8476 So once set up, for example to copy files into a bucket
8477
8478 rclone copy /path/to/files minio:bucket
8479
8480 Scaleway
8481 Scaleway (https://www.scaleway.com/object-storage/) The Object Storage
8482 platform allows you to store anything from backups, logs and web assets
8483 to documents and photos. Files can be dropped from the Scaleway con‐
8484 sole or transferred through our API and CLI or using any S3-compatible
8485 tool.
8486
8487 Scaleway provides an S3 interface which can be configured for use with
8488 rclone like this:
8489
8490 [scaleway]
8491 type = s3
8492 env_auth = false
8493 endpoint = s3.nl-ams.scw.cloud
8494 access_key_id = SCWXXXXXXXXXXXXXX
8495 secret_access_key = 1111111-2222-3333-44444-55555555555555
8496 region = nl-ams
8497 location_constraint =
8498 acl = private
8499 force_path_style = false
8500 server_side_encryption =
8501 storage_class =
8502
8503 Wasabi
8504 Wasabi (https://wasabi.com) is a cloud-based object storage service for
8505 a broad range of applications and use cases. Wasabi is designed for
8506 individuals and organizations that require a high-performance, reli‐
8507 able, and secure data storage infrastructure at minimal cost.
8508
8509 Wasabi provides an S3 interface which can be configured for use with
8510 rclone like this.
8511
8512 No remotes found - make a new one
8513 n) New remote
8514 s) Set configuration password
8515 n/s> n
8516 name> wasabi
8517 Type of storage to configure.
8518 Choose a number from below, or type in your own value
8519 1 / Amazon Drive
8520 \ "amazon cloud drive"
8521 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
8522 \ "s3"
8523 [snip]
8524 Storage> s3
8525 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
8526 Choose a number from below, or type in your own value
8527 1 / Enter AWS credentials in the next step
8528 \ "false"
8529 2 / Get AWS credentials from the environment (env vars or IAM)
8530 \ "true"
8531 env_auth> 1
8532 AWS Access Key ID - leave blank for anonymous access or runtime credentials.
8533 access_key_id> YOURACCESSKEY
8534 AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
8535 secret_access_key> YOURSECRETACCESSKEY
8536 Region to connect to.
8537 Choose a number from below, or type in your own value
8538 / The default endpoint - a good choice if you are unsure.
8539 1 | US Region, Northern Virginia or Pacific Northwest.
8540 | Leave location constraint empty.
8541 \ "us-east-1"
8542 [snip]
8543 region> us-east-1
8544 Endpoint for S3 API.
8545 Leave blank if using AWS to use the default endpoint for the region.
8546 Specify if using an S3 clone such as Ceph.
8547 endpoint> s3.wasabisys.com
8548 Location constraint - must be set to match the Region. Used when creating buckets only.
8549 Choose a number from below, or type in your own value
8550 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
8551 \ ""
8552 [snip]
8553 location_constraint>
8554 Canned ACL used when creating buckets and/or storing objects in S3.
8555 For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
8556 Choose a number from below, or type in your own value
8557 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
8558 \ "private"
8559 [snip]
8560 acl>
8561 The server-side encryption algorithm used when storing this object in S3.
8562 Choose a number from below, or type in your own value
8563 1 / None
8564 \ ""
8565 2 / AES256
8566 \ "AES256"
8567 server_side_encryption>
8568 The storage class to use when storing objects in S3.
8569 Choose a number from below, or type in your own value
8570 1 / Default
8571 \ ""
8572 2 / Standard storage class
8573 \ "STANDARD"
8574 3 / Reduced redundancy storage class
8575 \ "REDUCED_REDUNDANCY"
8576 4 / Standard Infrequent Access storage class
8577 \ "STANDARD_IA"
8578 storage_class>
8579 Remote config
8580 --------------------
8581 [wasabi]
8582 env_auth = false
8583 access_key_id = YOURACCESSKEY
8584 secret_access_key = YOURSECRETACCESSKEY
8585 region = us-east-1
8586 endpoint = s3.wasabisys.com
8587 location_constraint =
8588 acl =
8589 server_side_encryption =
8590 storage_class =
8591 --------------------
8592 y) Yes this is OK
8593 e) Edit this remote
8594 d) Delete this remote
8595 y/e/d> y
8596
8597 This will leave the config file looking like this.
8598
8599 [wasabi]
8600 type = s3
8601 provider = Wasabi
8602 env_auth = false
8603 access_key_id = YOURACCESSKEY
8604 secret_access_key = YOURSECRETACCESSKEY
8605 region =
8606 endpoint = s3.wasabisys.com
8607 location_constraint =
8608 acl =
8609 server_side_encryption =
8610 storage_class =
8611
8612 Alibaba OSS
8613 Here is an example of making an Alibaba Cloud (Aliyun) OSS
8614 (https://www.alibabacloud.com/product/oss/) configuration. First run:
8615
8616 rclone config
8617
8618 This will guide you through an interactive setup process.
8619
8620 No remotes found - make a new one
8621 n) New remote
8622 s) Set configuration password
8623 q) Quit config
8624 n/s/q> n
8625 name> oss
8626 Type of storage to configure.
8627 Enter a string value. Press Enter for the default ("").
8628 Choose a number from below, or type in your own value
8629 [snip]
8630 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
8631 \ "s3"
8632 [snip]
8633 Storage> s3
8634 Choose your S3 provider.
8635 Enter a string value. Press Enter for the default ("").
8636 Choose a number from below, or type in your own value
8637 1 / Amazon Web Services (AWS) S3
8638 \ "AWS"
8639 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
8640 \ "Alibaba"
8641 3 / Ceph Object Storage
8642 \ "Ceph"
8643 [snip]
8644 provider> Alibaba
8645 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
8646 Only applies if access_key_id and secret_access_key is blank.
8647 Enter a boolean value (true or false). Press Enter for the default ("false").
8648 Choose a number from below, or type in your own value
8649 1 / Enter AWS credentials in the next step
8650 \ "false"
8651 2 / Get AWS credentials from the environment (env vars or IAM)
8652 \ "true"
8653 env_auth> 1
8654 AWS Access Key ID.
8655 Leave blank for anonymous access or runtime credentials.
8656 Enter a string value. Press Enter for the default ("").
8657 access_key_id> accesskeyid
8658 AWS Secret Access Key (password)
8659 Leave blank for anonymous access or runtime credentials.
8660 Enter a string value. Press Enter for the default ("").
8661 secret_access_key> secretaccesskey
8662 Endpoint for OSS API.
8663 Enter a string value. Press Enter for the default ("").
8664 Choose a number from below, or type in your own value
8665 1 / East China 1 (Hangzhou)
8666 \ "oss-cn-hangzhou.aliyuncs.com"
8667 2 / East China 2 (Shanghai)
8668 \ "oss-cn-shanghai.aliyuncs.com"
8669 3 / North China 1 (Qingdao)
8670 \ "oss-cn-qingdao.aliyuncs.com"
8671 [snip]
8672 endpoint> 1
8673 Canned ACL used when creating buckets and storing or copying objects.
8674
8675 Note that this ACL is applied when server side copying objects as S3
8676 doesn't copy the ACL from the source but rather writes a fresh one.
8677 Enter a string value. Press Enter for the default ("").
8678 Choose a number from below, or type in your own value
8679 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
8680 \ "private"
8681 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
8682 \ "public-read"
8683 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
8684 [snip]
8685 acl> 1
8686 The storage class to use when storing new objects in OSS.
8687 Enter a string value. Press Enter for the default ("").
8688 Choose a number from below, or type in your own value
8689 1 / Default
8690 \ ""
8691 2 / Standard storage class
8692 \ "STANDARD"
8693 3 / Archive storage mode.
8694 \ "GLACIER"
8695 4 / Infrequent access storage mode.
8696 \ "STANDARD_IA"
8697 storage_class> 1
8698 Edit advanced config? (y/n)
8699 y) Yes
8700 n) No
8701 y/n> n
8702 Remote config
8703 --------------------
8704 [oss]
8705 type = s3
8706 provider = Alibaba
8707 env_auth = false
8708 access_key_id = accesskeyid
8709 secret_access_key = secretaccesskey
8710 endpoint = oss-cn-hangzhou.aliyuncs.com
8711 acl = private
8712 storage_class = Standard
8713 --------------------
8714 y) Yes this is OK
8715 e) Edit this remote
8716 d) Delete this remote
8717 y/e/d> y
8718
8719 Netease NOS
8720 For Netease NOS configure as per the configurator rclone config setting
8721 the provider Netease. This will automatically set
8722 force_path_style = false which is necessary for it to run properly.
8723
8724 Backblaze B2
8725 B2 is Backblaze's cloud storage system (https://www.backblaze.com/b2/).
8726
8727 Paths are specified as remote:bucket (or remote: for the lsd command.)
8728 You may put subdirectories in too, eg remote:bucket/path/to/dir.
8729
8730 Here is an example of making a b2 configuration. First run
8731
8732 rclone config
8733
8734 This will guide you through an interactive setup process. To authenti‐
8735 cate you will either need your Account ID (a short hex number) and Mas‐
8736 ter Application Key (a long hex number) OR an Application Key, which is
8737 the recommended method. See below for further details on generating
8738 and using an Application Key.
8739
8740 No remotes found - make a new one
8741 n) New remote
8742 q) Quit config
8743 n/q> n
8744 name> remote
8745 Type of storage to configure.
8746 Choose a number from below, or type in your own value
8747 1 / Amazon Drive
8748 \ "amazon cloud drive"
8749 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
8750 \ "s3"
8751 3 / Backblaze B2
8752 \ "b2"
8753 4 / Dropbox
8754 \ "dropbox"
8755 5 / Encrypt/Decrypt a remote
8756 \ "crypt"
8757 6 / Google Cloud Storage (this is not Google Drive)
8758 \ "google cloud storage"
8759 7 / Google Drive
8760 \ "drive"
8761 8 / Hubic
8762 \ "hubic"
8763 9 / Local Disk
8764 \ "local"
8765 10 / Microsoft OneDrive
8766 \ "onedrive"
8767 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
8768 \ "swift"
8769 12 / SSH/SFTP Connection
8770 \ "sftp"
8771 13 / Yandex Disk
8772 \ "yandex"
8773 Storage> 3
8774 Account ID or Application Key ID
8775 account> 123456789abc
8776 Application Key
8777 key> 0123456789abcdef0123456789abcdef0123456789
8778 Endpoint for the service - leave blank normally.
8779 endpoint>
8780 Remote config
8781 --------------------
8782 [remote]
8783 account = 123456789abc
8784 key = 0123456789abcdef0123456789abcdef0123456789
8785 endpoint =
8786 --------------------
8787 y) Yes this is OK
8788 e) Edit this remote
8789 d) Delete this remote
8790 y/e/d> y
8791
8792 This remote is called remote and can now be used like this
8793
8794 See all buckets
8795
8796 rclone lsd remote:
8797
8798 Create a new bucket
8799
8800 rclone mkdir remote:bucket
8801
8802 List the contents of a bucket
8803
8804 rclone ls remote:bucket
8805
8806 Sync /home/local/directory to the remote bucket, deleting any excess
8807 files in the bucket.
8808
8809 rclone sync /home/local/directory remote:bucket
8810
8811 Application Keys
8812 B2 supports multiple Application Keys for different access permission
8813 to B2 Buckets (https://www.backblaze.com/b2/docs/applica‐
8814 tion_keys.html).
8815
8816 You can use these with rclone too; you will need to use rclone version
8817 1.43 or later.
8818
8819 Follow Backblaze's docs to create an Application Key with the required
8820 permission and add the applicationKeyId as the account and the Applica‐
8821 tion Key itself as the key.
8822
8823 Note that you must put the applicationKeyId as the account – you can't
8824 use the master Account ID. If you try then B2 will return 401 errors.
8825
8826 –fast-list
8827 This remote supports --fast-list which allows you to use fewer transac‐
8828 tions in exchange for more memory. See the rclone docs (/docs/#fast-
8829 list) for more details.
8830
8831 Modified time
8832 The modified time is stored as metadata on the object as X-Bz-In‐
8833 fo-src_last_modified_millis as milliseconds since 1970-01-01 in the
8834 Backblaze standard. Other tools should be able to use this as a modi‐
8835 fied time.
8836
8837 Modified times are used in syncing and are fully supported except in
8838 the case of updating a modification time on an existing object. In
8839 this case the object will be uploaded again as B2 doesn't have an API
8840 method to set the modification time independent of doing an upload.
8841
8842 SHA1 checksums
8843 The SHA1 checksums of the files are checked on upload and download and
8844 will be used in the syncing process.
8845
8846 Large files (bigger than the limit in --b2-upload-cutoff) which are up‐
8847 loaded in chunks will store their SHA1 on the object as X-Bz-In‐
8848 fo-large_file_sha1 as recommended by Backblaze.
8849
8850 For a large file to be uploaded with an SHA1 checksum, the source needs
8851 to support SHA1 checksums. The local disk supports SHA1 checksums so
8852 large file transfers from local disk will have an SHA1. See the over‐
8853 view (/overview/#features) for exactly which remotes support SHA1.
8854
8855 Sources which don't support SHA1, in particular crypt will upload large
8856 files without SHA1 checksums. This may be fixed in the future (see
8857 #1767 (https://github.com/ncw/rclone/issues/1767)).
8858
8859 Files sizes below --b2-upload-cutoff will always have an SHA1 regard‐
8860 less of the source.
8861
8862 Transfers
8863 Backblaze recommends that you do lots of transfers simultaneously for
8864 maximum speed. In tests from my SSD equipped laptop the optimum set‐
8865 ting is about --transfers 32 though higher numbers may be used for a
8866 slight speed improvement. The optimum number for you may vary depend‐
8867 ing on your hardware, how big the files are, how much you want to load
8868 your computer, etc. The default of --transfers 4 is definitely too low
8869 for Backblaze B2 though.
8870
8871 Note that uploading big files (bigger than 200 MB by default) will use
8872 a 96 MB RAM buffer by default. There can be at most --transfers of
8873 these in use at any moment, so this sets the upper limit on the memory
8874 used.
8875
8876 Versions
8877 When rclone uploads a new version of a file it creates a new version of
8878 it (https://www.backblaze.com/b2/docs/file_versions.html). Likewise
8879 when you delete a file, the old version will be marked hidden and still
8880 be available. Conversely, you may opt in to a “hard delete” of files
8881 with the --b2-hard-delete flag which would permanently remove the file
8882 instead of hiding it.
8883
8884 Old versions of files, where available, are visible using the --b2-ver‐
8885 sions flag.
8886
8887 If you wish to remove all the old versions then you can use the
8888 rclone cleanup remote:bucket command which will delete all the old ver‐
8889 sions of files, leaving the current ones intact. You can also supply a
8890 path and only old versions under that path will be deleted, eg
8891 rclone cleanup remote:bucket/path/to/stuff.
8892
8893 Note that cleanup will remove partially uploaded files from the bucket
8894 if they are more than a day old.
8895
8896 When you purge a bucket, the current and the old versions will be
8897 deleted then the bucket will be deleted.
8898
8899 However delete will cause the current versions of the files to become
8900 hidden old versions.
8901
8902 Here is a session showing the listing and retrieval of an old version
8903 followed by a cleanup of the old versions.
8904
8905 Show current version and all the versions with --b2-versions flag.
8906
8907 $ rclone -q ls b2:cleanup-test
8908 9 one.txt
8909
8910 $ rclone -q --b2-versions ls b2:cleanup-test
8911 9 one.txt
8912 8 one-v2016-07-04-141032-000.txt
8913 16 one-v2016-07-04-141003-000.txt
8914 15 one-v2016-07-02-155621-000.txt
8915
8916 Retrieve an old version
8917
8918 $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
8919
8920 $ ls -l /tmp/one-v2016-07-04-141003-000.txt
8921 -rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
8922
8923 Clean up all the old versions and show that they've gone.
8924
8925 $ rclone -q cleanup b2:cleanup-test
8926
8927 $ rclone -q ls b2:cleanup-test
8928 9 one.txt
8929
8930 $ rclone -q --b2-versions ls b2:cleanup-test
8931 9 one.txt
8932
8933 Data usage
8934 It is useful to know how many requests are sent to the server in dif‐
8935 ferent scenarios.
8936
8937 All copy commands send the following 4 requests:
8938
8939 /b2api/v1/b2_authorize_account
8940 /b2api/v1/b2_create_bucket
8941 /b2api/v1/b2_list_buckets
8942 /b2api/v1/b2_list_file_names
8943
8944 The b2_list_file_names request will be sent once for every 1k files in
8945 the remote path, providing the checksum and modification time of the
8946 listed files. As of version 1.33 issue #818
8947 (https://github.com/ncw/rclone/issues/818) causes extra requests to be
8948 sent when using B2 with Crypt. When a copy operation does not require
8949 any files to be uploaded, no more requests will be sent.
8950
8951 Uploading files that do not require chunking, will send 2 requests per
8952 file upload:
8953
8954 /b2api/v1/b2_get_upload_url
8955 /b2api/v1/b2_upload_file/
8956
8957 Uploading files requiring chunking, will send 2 requests (one each to
8958 start and finish the upload) and another 2 requests for each chunk:
8959
8960 /b2api/v1/b2_start_large_file
8961 /b2api/v1/b2_get_upload_part_url
8962 /b2api/v1/b2_upload_part/
8963 /b2api/v1/b2_finish_large_file
8964
8965 Versions
8966 Versions can be viewed with the --b2-versions flag. When it is set
8967 rclone will show and act on older versions of files. For example
8968
8969 Listing without --b2-versions
8970
8971 $ rclone -q ls b2:cleanup-test
8972 9 one.txt
8973
8974 And with
8975
8976 $ rclone -q --b2-versions ls b2:cleanup-test
8977 9 one.txt
8978 8 one-v2016-07-04-141032-000.txt
8979 16 one-v2016-07-04-141003-000.txt
8980 15 one-v2016-07-02-155621-000.txt
8981
8982 Showing that the current version is unchanged but older versions can be
8983 seen. These have the UTC date that they were uploaded to the server to
8984 the nearest millisecond appended to them.
8985
8986 Note that when using --b2-versions no file write operations are permit‐
8987 ted, so you can't upload files or delete them.
8988
8989 Standard Options
8990 Here are the standard options specific to b2 (Backblaze B2).
8991
8992 –b2-account
8993 Account ID or Application Key ID
8994
8995 · Config: account
8996
8997 · Env Var: RCLONE_B2_ACCOUNT
8998
8999 · Type: string
9000
9001 · Default: ""
9002
9003 –b2-key
9004 Application Key
9005
9006 · Config: key
9007
9008 · Env Var: RCLONE_B2_KEY
9009
9010 · Type: string
9011
9012 · Default: ""
9013
9014 –b2-hard-delete
9015 Permanently delete files on remote removal, otherwise hide files.
9016
9017 · Config: hard_delete
9018
9019 · Env Var: RCLONE_B2_HARD_DELETE
9020
9021 · Type: bool
9022
9023 · Default: false
9024
9025 Advanced Options
9026 Here are the advanced options specific to b2 (Backblaze B2).
9027
9028 –b2-endpoint
9029 Endpoint for the service. Leave blank normally.
9030
9031 · Config: endpoint
9032
9033 · Env Var: RCLONE_B2_ENDPOINT
9034
9035 · Type: string
9036
9037 · Default: ""
9038
9039 –b2-test-mode
9040 A flag string for X-Bz-Test-Mode header for debugging.
9041
9042 This is for debugging purposes only. Setting it to one of the strings
9043 below will cause b2 to return specific errors:
9044
9045 · “fail_some_uploads”
9046
9047 · “expire_some_account_authorization_tokens”
9048
9049 · “force_cap_exceeded”
9050
9051 These will be set in the “X-Bz-Test-Mode” header which is documented in
9052 the b2 integrations checklist (https://www.backblaze.com/b2/docs/inte‐
9053 gration_checklist.html).
9054
9055 · Config: test_mode
9056
9057 · Env Var: RCLONE_B2_TEST_MODE
9058
9059 · Type: string
9060
9061 · Default: ""
9062
9063 –b2-versions
9064 Include old versions in directory listings. Note that when using this
9065 no file write operations are permitted, so you can't upload files or
9066 delete them.
9067
9068 · Config: versions
9069
9070 · Env Var: RCLONE_B2_VERSIONS
9071
9072 · Type: bool
9073
9074 · Default: false
9075
9076 –b2-upload-cutoff
9077 Cutoff for switching to chunked upload.
9078
9079 Files above this size will be uploaded in chunks of “–b2-chunk-size”.
9080
9081 This value should be set no larger than 4.657GiB (== 5GB).
9082
9083 · Config: upload_cutoff
9084
9085 · Env Var: RCLONE_B2_UPLOAD_CUTOFF
9086
9087 · Type: SizeSuffix
9088
9089 · Default: 200M
9090
9091 –b2-chunk-size
9092 Upload chunk size. Must fit in memory.
9093
9094 When uploading large files, chunk the file into this size. Note that
9095 these chunks are buffered in memory and there might a maximum of
9096 “–transfers” chunks in progress at once. 5,000,000 Bytes is the mini‐
9097 mum size.
9098
9099 · Config: chunk_size
9100
9101 · Env Var: RCLONE_B2_CHUNK_SIZE
9102
9103 · Type: SizeSuffix
9104
9105 · Default: 96M
9106
9107 –b2-disable-checksum
9108 Disable checksums for large (> upload cutoff) files
9109
9110 · Config: disable_checksum
9111
9112 · Env Var: RCLONE_B2_DISABLE_CHECKSUM
9113
9114 · Type: bool
9115
9116 · Default: false
9117
9118 –b2-download-url
9119 Custom endpoint for downloads.
9120
9121 This is usually set to a Cloudflare CDN URL as Backblaze offers free
9122 egress for data downloaded through the Cloudflare network. Leave blank
9123 if you want to use the endpoint provided by Backblaze.
9124
9125 · Config: download_url
9126
9127 · Env Var: RCLONE_B2_DOWNLOAD_URL
9128
9129 · Type: string
9130
9131 · Default: ""
9132
9133 Box
9134 Paths are specified as remote:path
9135
9136 Paths may be as deep as required, eg remote:directory/subdirectory.
9137
9138 The initial setup for Box involves getting a token from Box which you
9139 need to do in your browser. rclone config walks you through it.
9140
9141 Here is an example of how to make a remote called remote. First run:
9142
9143 rclone config
9144
9145 This will guide you through an interactive setup process:
9146
9147 No remotes found - make a new one
9148 n) New remote
9149 s) Set configuration password
9150 q) Quit config
9151 n/s/q> n
9152 name> remote
9153 Type of storage to configure.
9154 Choose a number from below, or type in your own value
9155 1 / Amazon Drive
9156 \ "amazon cloud drive"
9157 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
9158 \ "s3"
9159 3 / Backblaze B2
9160 \ "b2"
9161 4 / Box
9162 \ "box"
9163 5 / Dropbox
9164 \ "dropbox"
9165 6 / Encrypt/Decrypt a remote
9166 \ "crypt"
9167 7 / FTP Connection
9168 \ "ftp"
9169 8 / Google Cloud Storage (this is not Google Drive)
9170 \ "google cloud storage"
9171 9 / Google Drive
9172 \ "drive"
9173 10 / Hubic
9174 \ "hubic"
9175 11 / Local Disk
9176 \ "local"
9177 12 / Microsoft OneDrive
9178 \ "onedrive"
9179 13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
9180 \ "swift"
9181 14 / SSH/SFTP Connection
9182 \ "sftp"
9183 15 / Yandex Disk
9184 \ "yandex"
9185 16 / http Connection
9186 \ "http"
9187 Storage> box
9188 Box App Client Id - leave blank normally.
9189 client_id>
9190 Box App Client Secret - leave blank normally.
9191 client_secret>
9192 Remote config
9193 Use auto config?
9194 * Say Y if not sure
9195 * Say N if you are working on a remote or headless machine
9196 y) Yes
9197 n) No
9198 y/n> y
9199 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
9200 Log in and authorize rclone for access
9201 Waiting for code...
9202 Got code
9203 --------------------
9204 [remote]
9205 client_id =
9206 client_secret =
9207 token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
9208 --------------------
9209 y) Yes this is OK
9210 e) Edit this remote
9211 d) Delete this remote
9212 y/e/d> y
9213
9214 See the remote setup docs (https://rclone.org/remote_setup/) for how to
9215 set it up on a machine with no Internet browser available.
9216
9217 Note that rclone runs a webserver on your local machine to collect the
9218 token as returned from Box. This only runs from the moment it opens
9219 your browser to the moment you get back the verification code. This is
9220 on http://127.0.0.1:53682/ and this it may require you to unblock it
9221 temporarily if you are running a host firewall.
9222
9223 Once configured you can then use rclone like this,
9224
9225 List directories in top level of your Box
9226
9227 rclone lsd remote:
9228
9229 List all the files in your Box
9230
9231 rclone ls remote:
9232
9233 To copy a local directory to an Box directory called backup
9234
9235 rclone copy /home/source remote:backup
9236
9237 Using rclone with an Enterprise account with SSO
9238 If you have an “Enterprise” account type with Box with single sign on
9239 (SSO), you need to create a password to use Box with rclone. This can
9240 be done at your Enterprise Box account by going to Settings, “Account”
9241 Tab, and then set the password in the “Authentication” field.
9242
9243 Once you have done this, you can setup your Enterprise Box account us‐
9244 ing the same procedure detailed above in the, using the password you
9245 have just set.
9246
9247 Invalid refresh token
9248 According to the box docs (https://develop‐
9249 er.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-
9250 tokens):
9251
9252 Each refresh_token is valid for one use in 60 days.
9253
9254 This means that if you
9255
9256 · Don't use the box remote for 60 days
9257
9258 · Copy the config file with a box refresh token in and use it in two
9259 places
9260
9261 · Get an error on a token refresh
9262
9263 then rclone will return an error which includes the text Invalid re‐
9264 fresh token.
9265
9266 To fix this you will need to use oauth2 again to update the refresh to‐
9267 ken. You can use the methods in the remote setup docs
9268 (https://rclone.org/remote_setup/), bearing in mind that if you use the
9269 copy the config file method, you should not use that remote on the com‐
9270 puter you did the authentication on.
9271
9272 Here is how to do it.
9273
9274 $ rclone config
9275 Current remotes:
9276
9277 Name Type
9278 ==== ====
9279 remote box
9280
9281 e) Edit existing remote
9282 n) New remote
9283 d) Delete remote
9284 r) Rename remote
9285 c) Copy remote
9286 s) Set configuration password
9287 q) Quit config
9288 e/n/d/r/c/s/q> e
9289 Choose a number from below, or type in an existing value
9290 1 > remote
9291 remote> remote
9292 --------------------
9293 [remote]
9294 type = box
9295 token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
9296 --------------------
9297 Edit remote
9298 Value "client_id" = ""
9299 Edit? (y/n)>
9300 y) Yes
9301 n) No
9302 y/n> n
9303 Value "client_secret" = ""
9304 Edit? (y/n)>
9305 y) Yes
9306 n) No
9307 y/n> n
9308 Remote config
9309 Already have a token - refresh?
9310 y) Yes
9311 n) No
9312 y/n> y
9313 Use auto config?
9314 * Say Y if not sure
9315 * Say N if you are working on a remote or headless machine
9316 y) Yes
9317 n) No
9318 y/n> y
9319 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
9320 Log in and authorize rclone for access
9321 Waiting for code...
9322 Got code
9323 --------------------
9324 [remote]
9325 type = box
9326 token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
9327 --------------------
9328 y) Yes this is OK
9329 e) Edit this remote
9330 d) Delete this remote
9331 y/e/d> y
9332
9333 Modified time and hashes
9334 Box allows modification times to be set on objects accurate to 1 sec‐
9335 ond. These will be used to detect whether objects need syncing or not.
9336
9337 Box supports SHA1 type hashes, so you can use the --checksum flag.
9338
9339 Transfers
9340 For files above 50MB rclone will use a chunked transfer. Rclone will
9341 upload up to --transfers chunks at the same time (shared among all the
9342 multipart uploads). Chunks are buffered in memory and are normally 8MB
9343 so increasing --transfers will increase memory use.
9344
9345 Deleting files
9346 Depending on the enterprise settings for your user, the item will ei‐
9347 ther be actually deleted from Box or moved to the trash.
9348
9349 Standard Options
9350 Here are the standard options specific to box (Box).
9351
9352 –box-client-id
9353 Box App Client Id. Leave blank normally.
9354
9355 · Config: client_id
9356
9357 · Env Var: RCLONE_BOX_CLIENT_ID
9358
9359 · Type: string
9360
9361 · Default: ""
9362
9363 –box-client-secret
9364 Box App Client Secret Leave blank normally.
9365
9366 · Config: client_secret
9367
9368 · Env Var: RCLONE_BOX_CLIENT_SECRET
9369
9370 · Type: string
9371
9372 · Default: ""
9373
9374 Advanced Options
9375 Here are the advanced options specific to box (Box).
9376
9377 –box-upload-cutoff
9378 Cutoff for switching to multipart upload (>= 50MB).
9379
9380 · Config: upload_cutoff
9381
9382 · Env Var: RCLONE_BOX_UPLOAD_CUTOFF
9383
9384 · Type: SizeSuffix
9385
9386 · Default: 50M
9387
9388 –box-commit-retries
9389 Max number of times to try committing a multipart file.
9390
9391 · Config: commit_retries
9392
9393 · Env Var: RCLONE_BOX_COMMIT_RETRIES
9394
9395 · Type: int
9396
9397 · Default: 100
9398
9399 Limitations
9400 Note that Box is case insensitive so you can't have a file called “Hel‐
9401 lo.doc” and one called “hello.doc”.
9402
9403 Box file names can't have the \ character in. rclone maps this to and
9404 from an identical looking unicode equivalent \.
9405
9406 Box only supports filenames up to 255 characters in length.
9407
9408 Cache (BETA)
9409 The cache remote wraps another existing remote and stores file struc‐
9410 ture and its data for long running tasks like rclone mount.
9411
9412 To get started you just need to have an existing remote which can be
9413 configured with cache.
9414
9415 Here is an example of how to make a remote called test-cache. First
9416 run:
9417
9418 rclone config
9419
9420 This will guide you through an interactive setup process:
9421
9422 No remotes found - make a new one
9423 n) New remote
9424 r) Rename remote
9425 c) Copy remote
9426 s) Set configuration password
9427 q) Quit config
9428 n/r/c/s/q> n
9429 name> test-cache
9430 Type of storage to configure.
9431 Choose a number from below, or type in your own value
9432 ...
9433 5 / Cache a remote
9434 \ "cache"
9435 ...
9436 Storage> 5
9437 Remote to cache.
9438 Normally should contain a ':' and a path, eg "myremote:path/to/dir",
9439 "myremote:bucket" or maybe "myremote:" (not recommended).
9440 remote> local:/test
9441 Optional: The URL of the Plex server
9442 plex_url> http://127.0.0.1:32400
9443 Optional: The username of the Plex user
9444 plex_username> dummyusername
9445 Optional: The password of the Plex user
9446 y) Yes type in my own password
9447 g) Generate random password
9448 n) No leave this optional password blank
9449 y/g/n> y
9450 Enter the password:
9451 password:
9452 Confirm the password:
9453 password:
9454 The size of a chunk. Lower value good for slow connections but can affect seamless reading.
9455 Default: 5M
9456 Choose a number from below, or type in your own value
9457 1 / 1MB
9458 \ "1m"
9459 2 / 5 MB
9460 \ "5M"
9461 3 / 10 MB
9462 \ "10M"
9463 chunk_size> 2
9464 How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
9465 Accepted units are: "s", "m", "h".
9466 Default: 5m
9467 Choose a number from below, or type in your own value
9468 1 / 1 hour
9469 \ "1h"
9470 2 / 24 hours
9471 \ "24h"
9472 3 / 24 hours
9473 \ "48h"
9474 info_age> 2
9475 The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
9476 Default: 10G
9477 Choose a number from below, or type in your own value
9478 1 / 500 MB
9479 \ "500M"
9480 2 / 1 GB
9481 \ "1G"
9482 3 / 10 GB
9483 \ "10G"
9484 chunk_total_size> 3
9485 Remote config
9486 --------------------
9487 [test-cache]
9488 remote = local:/test
9489 plex_url = http://127.0.0.1:32400
9490 plex_username = dummyusername
9491 plex_password = *** ENCRYPTED ***
9492 chunk_size = 5M
9493 info_age = 48h
9494 chunk_total_size = 10G
9495
9496 You can then use it like this,
9497
9498 List directories in top level of your drive
9499
9500 rclone lsd test-cache:
9501
9502 List all the files in your drive
9503
9504 rclone ls test-cache:
9505
9506 To start a cached mount
9507
9508 rclone mount --allow-other test-cache: /var/tmp/test-cache
9509
9510 Write Features
9511 Offline uploading
9512 In an effort to make writing through cache more reliable, the backend
9513 now supports this feature which can be activated by specifying a
9514 cache-tmp-upload-path.
9515
9516 A files goes through these states when using this feature:
9517
9518 1. An upload is started (usually by copying a file on the cache remote)
9519
9520 2. When the copy to the temporary location is complete the file is part
9521 of the cached remote and looks and behaves like any other file
9522 (reading included)
9523
9524 3. After cache-tmp-wait-time passes and the file is next in line,
9525 rclone move is used to move the file to the cloud provider
9526
9527 4. Reading the file still works during the upload but most modifica‐
9528 tions on it will be prohibited
9529
9530 5. Once the move is complete the file is unlocked for modifications as
9531 it becomes as any other regular file
9532
9533 6. If the file is being read through cache when it's actually deleted
9534 from the temporary path then cache will simply swap the source to
9535 the cloud provider without interrupting the reading (small blip can
9536 happen though)
9537
9538 Files are uploaded in sequence and only one file is uploaded at a time.
9539 Uploads will be stored in a queue and be processed based on the order
9540 they were added. The queue and the temporary storage is persistent
9541 across restarts but can be cleared on startup with the --cache-db-purge
9542 flag.
9543
9544 Write Support
9545 Writes are supported through cache. One caveat is that a mounted cache
9546 remote does not add any retry or fallback mechanism to the upload oper‐
9547 ation. This will depend on the implementation of the wrapped remote.
9548 Consider using Offline uploading for reliable writes.
9549
9550 One special case is covered with cache-writes which will cache the file
9551 data at the same time as the upload when it is enabled making it avail‐
9552 able from the cache store immediately once the upload is finished.
9553
9554 Read Features
9555 Multiple connections
9556 To counter the high latency between a local PC where rclone is running
9557 and cloud providers, the cache remote can split multiple requests to
9558 the cloud provider for smaller file chunks and combines them together
9559 locally where they can be available almost immediately before the read‐
9560 er usually needs them.
9561
9562 This is similar to buffering when media files are played online.
9563 Rclone will stay around the current marker but always try its best to
9564 stay ahead and prepare the data before.
9565
9566 Plex Integration
9567 There is a direct integration with Plex which allows cache to detect
9568 during reading if the file is in playback or not. This helps cache to
9569 adapt how it queries the cloud provider depending on what is needed
9570 for.
9571
9572 Scans will have a minimum amount of workers (1) while in a confirmed
9573 playback cache will deploy the configured number of workers.
9574
9575 This integration opens the doorway to additional performance improve‐
9576 ments which will be explored in the near future.
9577
9578 Note: If Plex options are not configured, cache will function with its
9579 configured options without adapting any of its settings.
9580
9581 How to enable? Run rclone config and add all the Plex options (end‐
9582 point, username and password) in your remote and it will be automati‐
9583 cally enabled.
9584
9585 Affected settings: - cache-workers: Configured value during confirmed
9586 playback or 1 all the other times
9587
9588 Certificate Validation
9589 When the Plex server is configured to only accept secure connections,
9590 it is possible to use .plex.direct URL's to ensure certificate valida‐
9591 tion succeeds. These URL's are used by Plex internally to connect to
9592 the Plex server securely.
9593
9594 The format for this URL's is the following:
9595
9596 https://ip-with-dots-replaced.server-hash.plex.direct:32400/
9597
9598 The ip-with-dots-replaced part can be any IPv4 address, where the dots
9599 have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.
9600
9601 To get the server-hash part, the easiest way is to visit
9602
9603 https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-to‐
9604 ken
9605
9606 This page will list all the available Plex servers for your account
9607 with at least one .plex.direct link for each. Copy one URL and replace
9608 the IP address with the desired address. This can be used as the
9609 plex_url value.
9610
9611 Known issues
9612 Mount and –dir-cache-time
9613 –dir-cache-time controls the first layer of directory caching which
9614 works at the mount layer. Being an independent caching mechanism from
9615 the cache backend, it will manage its own entries based on the config‐
9616 ured time.
9617
9618 To avoid getting in a scenario where dir cache has obsolete data and
9619 cache would have the correct one, try to set --dir-cache-time to a low‐
9620 er time than --cache-info-age. Default values are already configured
9621 in this way.
9622
9623 Windows support - Experimental
9624 There are a couple of issues with Windows mount functionality that
9625 still require some investigations. It should be considered as experi‐
9626 mental thus far as fixes come in for this OS.
9627
9628 Most of the issues seem to be related to the difference between
9629 filesystems on Linux flavors and Windows as cache is heavily dependant
9630 on them.
9631
9632 Any reports or feedback on how cache behaves on this OS is greatly ap‐
9633 preciated.
9634
9635 · https://github.com/ncw/rclone/issues/1935
9636
9637 · https://github.com/ncw/rclone/issues/1907
9638
9639 · https://github.com/ncw/rclone/issues/1834
9640
9641 Risk of throttling
9642 Future iterations of the cache backend will make use of the pooling
9643 functionality of the cloud provider to synchronize and at the same time
9644 make writing through it more tolerant to failures.
9645
9646 There are a couple of enhancements in track to add these but in the
9647 meantime there is a valid concern that the expiring cache listings can
9648 lead to cloud provider throttles or bans due to repeated queries on it
9649 for very large mounts.
9650
9651 Some recommendations: - don't use a very small interval for entry in‐
9652 formations (--cache-info-age) - while writes aren't yet optimised, you
9653 can still write through cache which gives you the advantage of adding
9654 the file in the cache at the same time if configured to do so.
9655
9656 Future enhancements:
9657
9658 · https://github.com/ncw/rclone/issues/1937
9659
9660 · https://github.com/ncw/rclone/issues/1936
9661
9662 cache and crypt
9663 One common scenario is to keep your data encrypted in the cloud
9664 provider using the crypt remote. crypt uses a similar technique to
9665 wrap around an existing remote and handles this translation in a seam‐
9666 less way.
9667
9668 There is an issue with wrapping the remotes in this order: cloud remote
9669 -> crypt -> cache
9670
9671 During testing, I experienced a lot of bans with the remotes in this
9672 order. I suspect it might be related to how crypt opens files on the
9673 cloud provider which makes it think we're downloading the full file in‐
9674 stead of small chunks. Organizing the remotes in this order yields
9675 better results: cloud remote -> cache -> crypt
9676
9677 absolute remote paths
9678 cache can not differentiate between relative and absolute paths for the
9679 wrapped remote. Any path given in the remote config setting and on the
9680 command line will be passed to the wrapped remote as is, but for stor‐
9681 ing the chunks on disk the path will be made relative by removing any
9682 leading / character.
9683
9684 This behavior is irrelevant for most backend types, but there are back‐
9685 ends where a leading / changes the effective directory, e.g. in the
9686 sftp backend paths starting with a / are relative to the root of the
9687 SSH server and paths without are relative to the user home directory.
9688 As a result sftp:bin and sftp:/bin will share the same cache folder,
9689 even if they represent a different directory on the SSH server.
9690
9691 Cache and Remote Control (–rc)
9692 Cache supports the new --rc mode in rclone and can be remote controlled
9693 through the following end points: By default, the listener is disabled
9694 if you do not add the flag.
9695
9696 rc cache/expire
9697 Purge a remote from the cache backend. Supports either a directory or
9698 a file. It supports both encrypted and unencrypted file names if cache
9699 is wrapped by crypt.
9700
9701 Params: - remote = path to remote (required) - withData = true/false to
9702 delete cached data (chunks) as well (optional, false by default)
9703
9704 Standard Options
9705 Here are the standard options specific to cache (Cache a remote).
9706
9707 –cache-remote
9708 Remote to cache. Normally should contain a `:' and a path, eg “myre‐
9709 mote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recom‐
9710 mended).
9711
9712 · Config: remote
9713
9714 · Env Var: RCLONE_CACHE_REMOTE
9715
9716 · Type: string
9717
9718 · Default: ""
9719
9720 –cache-plex-url
9721 The URL of the Plex server
9722
9723 · Config: plex_url
9724
9725 · Env Var: RCLONE_CACHE_PLEX_URL
9726
9727 · Type: string
9728
9729 · Default: ""
9730
9731 –cache-plex-username
9732 The username of the Plex user
9733
9734 · Config: plex_username
9735
9736 · Env Var: RCLONE_CACHE_PLEX_USERNAME
9737
9738 · Type: string
9739
9740 · Default: ""
9741
9742 –cache-plex-password
9743 The password of the Plex user
9744
9745 · Config: plex_password
9746
9747 · Env Var: RCLONE_CACHE_PLEX_PASSWORD
9748
9749 · Type: string
9750
9751 · Default: ""
9752
9753 –cache-chunk-size
9754 The size of a chunk (partial file data).
9755
9756 Use lower numbers for slower connections. If the chunk size is
9757 changed, any downloaded chunks will be invalid and cache-chunk-path
9758 will need to be cleared or unexpected EOF errors will occur.
9759
9760 · Config: chunk_size
9761
9762 · Env Var: RCLONE_CACHE_CHUNK_SIZE
9763
9764 · Type: SizeSuffix
9765
9766 · Default: 5M
9767
9768 · Examples:
9769
9770 · “1m”
9771
9772 · 1MB
9773
9774 · “5M”
9775
9776 · 5 MB
9777
9778 · “10M”
9779
9780 · 10 MB
9781
9782 –cache-info-age
9783 How long to cache file structure information (directory listings, file
9784 size, times etc). If all write operations are done through the cache
9785 then you can safely make this value very large as the cache store will
9786 also be updated in real time.
9787
9788 · Config: info_age
9789
9790 · Env Var: RCLONE_CACHE_INFO_AGE
9791
9792 · Type: Duration
9793
9794 · Default: 6h0m0s
9795
9796 · Examples:
9797
9798 · “1h”
9799
9800 · 1 hour
9801
9802 · “24h”
9803
9804 · 24 hours
9805
9806 · “48h”
9807
9808 · 48 hours
9809
9810 –cache-chunk-total-size
9811 The total size that the chunks can take up on the local disk.
9812
9813 If the cache exceeds this value then it will start to delete the oldest
9814 chunks until it goes under this value.
9815
9816 · Config: chunk_total_size
9817
9818 · Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
9819
9820 · Type: SizeSuffix
9821
9822 · Default: 10G
9823
9824 · Examples:
9825
9826 · “500M”
9827
9828 · 500 MB
9829
9830 · “1G”
9831
9832 · 1 GB
9833
9834 · “10G”
9835
9836 · 10 GB
9837
9838 Advanced Options
9839 Here are the advanced options specific to cache (Cache a remote).
9840
9841 –cache-plex-token
9842 The plex token for authentication - auto set normally
9843
9844 · Config: plex_token
9845
9846 · Env Var: RCLONE_CACHE_PLEX_TOKEN
9847
9848 · Type: string
9849
9850 · Default: ""
9851
9852 –cache-plex-insecure
9853 Skip all certificate verifications when connecting to the Plex server
9854
9855 · Config: plex_insecure
9856
9857 · Env Var: RCLONE_CACHE_PLEX_INSECURE
9858
9859 · Type: string
9860
9861 · Default: ""
9862
9863 –cache-db-path
9864 Directory to store file structure metadata DB. The remote name is used
9865 as the DB file name.
9866
9867 · Config: db_path
9868
9869 · Env Var: RCLONE_CACHE_DB_PATH
9870
9871 · Type: string
9872
9873 · Default: “/home/ncw/.cache/rclone/cache-backend”
9874
9875 –cache-chunk-path
9876 Directory to cache chunk files.
9877
9878 Path to where partial file data (chunks) are stored locally. The re‐
9879 mote name is appended to the final path.
9880
9881 This config follows the “–cache-db-path”. If you specify a custom lo‐
9882 cation for “–cache-db-path” and don't specify one for
9883 “–cache-chunk-path” then “–cache-chunk-path” will use the same path as
9884 “–cache-db-path”.
9885
9886 · Config: chunk_path
9887
9888 · Env Var: RCLONE_CACHE_CHUNK_PATH
9889
9890 · Type: string
9891
9892 · Default: “/home/ncw/.cache/rclone/cache-backend”
9893
9894 –cache-db-purge
9895 Clear all the cached data for this remote on start.
9896
9897 · Config: db_purge
9898
9899 · Env Var: RCLONE_CACHE_DB_PURGE
9900
9901 · Type: bool
9902
9903 · Default: false
9904
9905 –cache-chunk-clean-interval
9906 How often should the cache perform cleanups of the chunk storage. The
9907 default value should be ok for most people. If you find that the cache
9908 goes over “cache-chunk-total-size” too often then try to lower this
9909 value to force it to perform cleanups more often.
9910
9911 · Config: chunk_clean_interval
9912
9913 · Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
9914
9915 · Type: Duration
9916
9917 · Default: 1m0s
9918
9919 –cache-read-retries
9920 How many times to retry a read from a cache storage.
9921
9922 Since reading from a cache stream is independent from downloading file
9923 data, readers can get to a point where there's no more data in the
9924 cache. Most of the times this can indicate a connectivity issue if
9925 cache isn't able to provide file data anymore.
9926
9927 For really slow connections, increase this to a point where the stream
9928 is able to provide data but your experience will be very stuttering.
9929
9930 · Config: read_retries
9931
9932 · Env Var: RCLONE_CACHE_READ_RETRIES
9933
9934 · Type: int
9935
9936 · Default: 10
9937
9938 –cache-workers
9939 How many workers should run in parallel to download chunks.
9940
9941 Higher values will mean more parallel processing (better CPU needed)
9942 and more concurrent requests on the cloud provider. This impacts sev‐
9943 eral aspects like the cloud provider API limits, more stress on the
9944 hardware that rclone runs on but it also means that streams will be
9945 more fluid and data will be available much more faster to readers.
9946
9947 Note: If the optional Plex integration is enabled then this setting
9948 will adapt to the type of reading performed and the value specified
9949 here will be used as a maximum number of workers to use.
9950
9951 · Config: workers
9952
9953 · Env Var: RCLONE_CACHE_WORKERS
9954
9955 · Type: int
9956
9957 · Default: 4
9958
9959 –cache-chunk-no-memory
9960 Disable the in-memory cache for storing chunks during streaming.
9961
9962 By default, cache will keep file data during streaming in RAM as well
9963 to provide it to readers as fast as possible.
9964
9965 This transient data is evicted as soon as it is read and the number of
9966 chunks stored doesn't exceed the number of workers. However, depending
9967 on other settings like “cache-chunk-size” and “cache-workers” this
9968 footprint can increase if there are parallel streams too (multiple
9969 files being read at the same time).
9970
9971 If the hardware permits it, use this feature to provide an overall bet‐
9972 ter performance during streaming but it can also be disabled if RAM is
9973 not available on the local machine.
9974
9975 · Config: chunk_no_memory
9976
9977 · Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
9978
9979 · Type: bool
9980
9981 · Default: false
9982
9983 –cache-rps
9984 Limits the number of requests per second to the source FS (-1 to dis‐
9985 able)
9986
9987 This setting places a hard limit on the number of requests per second
9988 that cache will be doing to the cloud provider remote and try to re‐
9989 spect that value by setting waits between reads.
9990
9991 If you find that you're getting banned or limited on the cloud provider
9992 through cache and know that a smaller number of requests per second
9993 will allow you to work with it then you can use this setting for that.
9994
9995 A good balance of all the other settings should make this setting use‐
9996 less but it is available to set for more special cases.
9997
9998 NOTE: This will limit the number of requests during streams but other
9999 API calls to the cloud provider like directory listings will still
10000 pass.
10001
10002 · Config: rps
10003
10004 · Env Var: RCLONE_CACHE_RPS
10005
10006 · Type: int
10007
10008 · Default: -1
10009
10010 –cache-writes
10011 Cache file data on writes through the FS
10012
10013 If you need to read files immediately after you upload them through
10014 cache you can enable this flag to have their data stored in the cache
10015 store at the same time during upload.
10016
10017 · Config: writes
10018
10019 · Env Var: RCLONE_CACHE_WRITES
10020
10021 · Type: bool
10022
10023 · Default: false
10024
10025 –cache-tmp-upload-path
10026 Directory to keep temporary files until they are uploaded.
10027
10028 This is the path where cache will use as a temporary storage for new
10029 files that need to be uploaded to the cloud provider.
10030
10031 Specifying a value will enable this feature. Without it, it is com‐
10032 pletely disabled and files will be uploaded directly to the cloud
10033 provider
10034
10035 · Config: tmp_upload_path
10036
10037 · Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
10038
10039 · Type: string
10040
10041 · Default: ""
10042
10043 –cache-tmp-wait-time
10044 How long should files be stored in local cache before being uploaded
10045
10046 This is the duration that a file must wait in the temporary location
10047 cache-tmp-upload-path before it is selected for upload.
10048
10049 Note that only one file is uploaded at a time and it can take longer to
10050 start the upload if a queue formed for this purpose.
10051
10052 · Config: tmp_wait_time
10053
10054 · Env Var: RCLONE_CACHE_TMP_WAIT_TIME
10055
10056 · Type: Duration
10057
10058 · Default: 15s
10059
10060 –cache-db-wait-time
10061 How long to wait for the DB to be available - 0 is unlimited
10062
10063 Only one process can have the DB open at any one time, so rclone waits
10064 for this duration for the DB to become available before it gives an er‐
10065 ror.
10066
10067 If you set it to 0 then it will wait forever.
10068
10069 · Config: db_wait_time
10070
10071 · Env Var: RCLONE_CACHE_DB_WAIT_TIME
10072
10073 · Type: Duration
10074
10075 · Default: 1s
10076
10077 Crypt
10078 The crypt remote encrypts and decrypts another remote.
10079
10080 To use it first set up the underlying remote following the config in‐
10081 structions for that remote. You can also use a local pathname instead
10082 of a remote which will encrypt and decrypt from that directory which
10083 might be useful for encrypting onto a USB stick for example.
10084
10085 First check your chosen remote is working - we'll call it remote:path
10086 in these docs. Note that anything inside remote:path will be encrypted
10087 and anything outside won't. This means that if you are using a bucket
10088 based remote (eg S3, B2, swift) then you should probably put the bucket
10089 in the remote s3:bucket. If you just use s3: then rclone will make en‐
10090 crypted bucket names too (if using file name encryption) which may or
10091 may not be what you want.
10092
10093 Now configure crypt using rclone config. We will call this one secret
10094 to differentiate it from the remote.
10095
10096 No remotes found - make a new one
10097 n) New remote
10098 s) Set configuration password
10099 q) Quit config
10100 n/s/q> n
10101 name> secret
10102 Type of storage to configure.
10103 Choose a number from below, or type in your own value
10104 1 / Amazon Drive
10105 \ "amazon cloud drive"
10106 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
10107 \ "s3"
10108 3 / Backblaze B2
10109 \ "b2"
10110 4 / Dropbox
10111 \ "dropbox"
10112 5 / Encrypt/Decrypt a remote
10113 \ "crypt"
10114 6 / Google Cloud Storage (this is not Google Drive)
10115 \ "google cloud storage"
10116 7 / Google Drive
10117 \ "drive"
10118 8 / Hubic
10119 \ "hubic"
10120 9 / Local Disk
10121 \ "local"
10122 10 / Microsoft OneDrive
10123 \ "onedrive"
10124 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
10125 \ "swift"
10126 12 / SSH/SFTP Connection
10127 \ "sftp"
10128 13 / Yandex Disk
10129 \ "yandex"
10130 Storage> 5
10131 Remote to encrypt/decrypt.
10132 Normally should contain a ':' and a path, eg "myremote:path/to/dir",
10133 "myremote:bucket" or maybe "myremote:" (not recommended).
10134 remote> remote:path
10135 How to encrypt the filenames.
10136 Choose a number from below, or type in your own value
10137 1 / Don't encrypt the file names. Adds a ".bin" extension only.
10138 \ "off"
10139 2 / Encrypt the filenames see the docs for the details.
10140 \ "standard"
10141 3 / Very simple filename obfuscation.
10142 \ "obfuscate"
10143 filename_encryption> 2
10144 Option to either encrypt directory names or leave them intact.
10145 Choose a number from below, or type in your own value
10146 1 / Encrypt directory names.
10147 \ "true"
10148 2 / Don't encrypt directory names, leave them intact.
10149 \ "false"
10150 filename_encryption> 1
10151 Password or pass phrase for encryption.
10152 y) Yes type in my own password
10153 g) Generate random password
10154 y/g> y
10155 Enter the password:
10156 password:
10157 Confirm the password:
10158 password:
10159 Password or pass phrase for salt. Optional but recommended.
10160 Should be different to the previous password.
10161 y) Yes type in my own password
10162 g) Generate random password
10163 n) No leave this optional password blank
10164 y/g/n> g
10165 Password strength in bits.
10166 64 is just about memorable
10167 128 is secure
10168 1024 is the maximum
10169 Bits> 128
10170 Your password is: JAsJvRcgR-_veXNfy_sGmQ
10171 Use this password?
10172 y) Yes
10173 n) No
10174 y/n> y
10175 Remote config
10176 --------------------
10177 [secret]
10178 remote = remote:path
10179 filename_encryption = standard
10180 password = *** ENCRYPTED ***
10181 password2 = *** ENCRYPTED ***
10182 --------------------
10183 y) Yes this is OK
10184 e) Edit this remote
10185 d) Delete this remote
10186 y/e/d> y
10187
10188 Important The password is stored in the config file is lightly obscured
10189 so it isn't immediately obvious what it is. It is in no way secure un‐
10190 less you use config file encryption.
10191
10192 A long passphrase is recommended, or you can use a random one. Note
10193 that if you reconfigure rclone with the same passwords/passphrases
10194 elsewhere it will be compatible - all the secrets used are derived from
10195 those two passwords/passphrases.
10196
10197 Note that rclone does not encrypt
10198
10199 · file length - this can be calcuated within 16 bytes
10200
10201 · modification time - used for syncing
10202
10203 Specifying the remote
10204 In normal use, make sure the remote has a : in. If you specify the re‐
10205 mote without a : then rclone will use a local directory of that name.
10206 So if you use a remote of /path/to/secret/files then rclone will en‐
10207 crypt stuff to that directory. If you use a remote of name then rclone
10208 will put files in a directory called name in the current directory.
10209
10210 If you specify the remote as remote:path/to/dir then rclone will store
10211 encrypted files in path/to/dir on the remote. If you are using file
10212 name encryption, then when you save files to secret:subdir/subfile this
10213 will store them in the unencrypted path path/to/dir but the subdir/sub‐
10214 path bit will be encrypted.
10215
10216 Note that unless you want encrypted bucket names (which are difficult
10217 to manage because you won't know what directory they represent in web
10218 interfaces etc), you should probably specify a bucket, eg remote:se‐
10219 cretbucket when using bucket based remotes such as S3, Swift, Hubic,
10220 B2, GCS.
10221
10222 Example
10223 To test I made a little directory of files using “standard” file name
10224 encryption.
10225
10226 plaintext/
10227 ├── file0.txt
10228 ├── file1.txt
10229 └── subdir
10230 ├── file2.txt
10231 ├── file3.txt
10232 └── subsubdir
10233 └── file4.txt
10234
10235 Copy these to the remote and list them back
10236
10237 $ rclone -q copy plaintext secret:
10238 $ rclone -q ls secret:
10239 7 file1.txt
10240 6 file0.txt
10241 8 subdir/file2.txt
10242 10 subdir/subsubdir/file4.txt
10243 9 subdir/file3.txt
10244
10245 Now see what that looked like when encrypted
10246
10247 $ rclone -q ls remote:path
10248 55 hagjclgavj2mbiqm6u6cnjjqcg
10249 54 v05749mltvv1tf4onltun46gls
10250 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
10251 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
10252 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
10253
10254 Note that this retains the directory structure which means you can do
10255 this
10256
10257 $ rclone -q ls secret:subdir
10258 8 file2.txt
10259 9 file3.txt
10260 10 subsubdir/file4.txt
10261
10262 If don't use file name encryption then the remote will look like this -
10263 note the .bin extensions added to prevent the cloud provider attempting
10264 to interpret the data.
10265
10266 $ rclone -q ls remote:path
10267 54 file0.txt.bin
10268 57 subdir/file3.txt.bin
10269 56 subdir/file2.txt.bin
10270 58 subdir/subsubdir/file4.txt.bin
10271 55 file1.txt.bin
10272
10273 File name encryption modes
10274 Here are some of the features of the file name encryption modes
10275
10276 Off
10277
10278 · doesn't hide file names or directory structure
10279
10280 · allows for longer file names (~246 characters)
10281
10282 · can use sub paths and copy single files
10283
10284 Standard
10285
10286 · file names encrypted
10287
10288 · file names can't be as long (~143 characters)
10289
10290 · can use sub paths and copy single files
10291
10292 · directory structure visible
10293
10294 · identical files names will have identical uploaded names
10295
10296 · can use shortcuts to shorten the directory recursion
10297
10298 Obfuscation
10299
10300 This is a simple “rotate” of the filename, with each file having a rot
10301 distance based on the filename. We store the distance at the beginning
10302 of the filename. So a file called “hello” may become “53.jgnnq”
10303
10304 This is not a strong encryption of filenames, but it may stop automated
10305 scanning tools from picking up on filename patterns. As such it's an
10306 intermediate between “off” and “standard”. The advantage is that it
10307 allows for longer path segment names.
10308
10309 There is a possibility with some unicode based filenames that the ob‐
10310 fuscation is weak and may map lower case characters to upper case
10311 equivalents. You can not rely on this for strong protection.
10312
10313 · file names very lightly obfuscated
10314
10315 · file names can be longer than standard encryption
10316
10317 · can use sub paths and copy single files
10318
10319 · directory structure visible
10320
10321 · identical files names will have identical uploaded names
10322
10323 Cloud storage systems have various limits on file name length and total
10324 path length which you are more likely to hit using “Standard” file name
10325 encryption. If you keep your file names to below 156 characters in
10326 length then you should be OK on all providers.
10327
10328 There may be an even more secure file name encryption mode in the fu‐
10329 ture which will address the long file name problem.
10330
10331 Directory name encryption
10332 Crypt offers the option of encrypting dir names or leaving them intact.
10333 There are two options:
10334
10335 True
10336
10337 Encrypts the whole file path including directory names Example:
10338 1/12/123.txt is encrypted to
10339 p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
10340
10341 False
10342
10343 Only encrypts file names, skips directory names Example: 1/12/123.txt
10344 is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
10345
10346 Modified time and hashes
10347 Crypt stores modification times using the underlying remote so support
10348 depends on that.
10349
10350 Hashes are not stored for crypt. However the data integrity is pro‐
10351 tected by an extremely strong crypto authenticator.
10352
10353 Note that you should use the rclone cryptcheck command to check the in‐
10354 tegrity of a crypted remote instead of rclone check which can't check
10355 the checksums properly.
10356
10357 Standard Options
10358 Here are the standard options specific to crypt (Encrypt/Decrypt a re‐
10359 mote).
10360
10361 –crypt-remote
10362 Remote to encrypt/decrypt. Normally should contain a `:' and a path,
10363 eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not
10364 recommended).
10365
10366 · Config: remote
10367
10368 · Env Var: RCLONE_CRYPT_REMOTE
10369
10370 · Type: string
10371
10372 · Default: ""
10373
10374 –crypt-filename-encryption
10375 How to encrypt the filenames.
10376
10377 · Config: filename_encryption
10378
10379 · Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
10380
10381 · Type: string
10382
10383 · Default: “standard”
10384
10385 · Examples:
10386
10387 · “off”
10388
10389 · Don't encrypt the file names. Adds a “.bin” extension only.
10390
10391 · “standard”
10392
10393 · Encrypt the filenames see the docs for the details.
10394
10395 · “obfuscate”
10396
10397 · Very simple filename obfuscation.
10398
10399 –crypt-directory-name-encryption
10400 Option to either encrypt directory names or leave them intact.
10401
10402 · Config: directory_name_encryption
10403
10404 · Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
10405
10406 · Type: bool
10407
10408 · Default: true
10409
10410 · Examples:
10411
10412 · “true”
10413
10414 · Encrypt directory names.
10415
10416 · “false”
10417
10418 · Don't encrypt directory names, leave them intact.
10419
10420 –crypt-password
10421 Password or pass phrase for encryption.
10422
10423 · Config: password
10424
10425 · Env Var: RCLONE_CRYPT_PASSWORD
10426
10427 · Type: string
10428
10429 · Default: ""
10430
10431 –crypt-password2
10432 Password or pass phrase for salt. Optional but recommended. Should be
10433 different to the previous password.
10434
10435 · Config: password2
10436
10437 · Env Var: RCLONE_CRYPT_PASSWORD2
10438
10439 · Type: string
10440
10441 · Default: ""
10442
10443 Advanced Options
10444 Here are the advanced options specific to crypt (Encrypt/Decrypt a re‐
10445 mote).
10446
10447 –crypt-show-mapping
10448 For all files listed show how the names encrypt.
10449
10450 If this flag is set then for each file that the remote is asked to
10451 list, it will log (at level INFO) a line stating the decrypted file
10452 name and the encrypted file name.
10453
10454 This is so you can work out which encrypted names are which decrypted
10455 names just in case you need to do something with the encrypted file
10456 names, or for debugging purposes.
10457
10458 · Config: show_mapping
10459
10460 · Env Var: RCLONE_CRYPT_SHOW_MAPPING
10461
10462 · Type: bool
10463
10464 · Default: false
10465
10466 Backing up a crypted remote
10467 If you wish to backup a crypted remote, it it recommended that you use
10468 rclone sync on the encrypted files, and make sure the passwords are the
10469 same in the new encrypted remote.
10470
10471 This will have the following advantages
10472
10473 · rclone sync will check the checksums while copying
10474
10475 · you can use rclone check between the encrypted remotes
10476
10477 · you don't decrypt and encrypt unnecessarily
10478
10479 For example, let's say you have your original remote at remote: with
10480 the encrypted version at eremote: with path remote:crypt. You would
10481 then set up the new remote remote2: and then the encrypted version ere‐
10482 mote2: with path remote2:crypt using the same passwords as eremote:.
10483
10484 To sync the two remotes you would do
10485
10486 rclone sync remote:crypt remote2:crypt
10487
10488 And to check the integrity you would do
10489
10490 rclone check remote:crypt remote2:crypt
10491
10492 File formats
10493 File encryption
10494 Files are encrypted 1:1 source file to destination object. The file
10495 has a header and is divided into chunks.
10496
10497 Header
10498 · 8 bytes magic string RCLONE\x00\x00
10499
10500 · 24 bytes Nonce (IV)
10501
10502 The initial nonce is generated from the operating systems crypto strong
10503 random number generator. The nonce is incremented for each chunk read
10504 making sure each nonce is unique for each block written. The chance of
10505 a nonce being re-used is minuscule. If you wrote an exabyte of data
10506 (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of
10507 re-using a nonce.
10508
10509 Chunk
10510 Each chunk will contain 64kB of data, except for the last one which may
10511 have less data. The data chunk is in standard NACL secretbox format.
10512 Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate mes‐
10513 sages.
10514
10515 Each chunk contains:
10516
10517 · 16 Bytes of Poly1305 authenticator
10518
10519 · 1 - 65536 bytes XSalsa20 encrypted data
10520
10521 64k chunk size was chosen as the best performing chunk size (the au‐
10522 thenticator takes too much time below this and the performance drops
10523 off due to cache effects above this). Note that these chunks are
10524 buffered in memory so they can't be too big.
10525
10526 This uses a 32 byte (256 bit key) key derived from the user password.
10527
10528 Examples
10529 1 byte file will encrypt to
10530
10531 · 32 bytes header
10532
10533 · 17 bytes data chunk
10534
10535 49 bytes total
10536
10537 1MB (1048576 bytes) file will encrypt to
10538
10539 · 32 bytes header
10540
10541 · 16 chunks of 65568 bytes
10542
10543 1049120 bytes total (a 0.05% overhead). This is the overhead for big
10544 files.
10545
10546 Name encryption
10547 File names are encrypted segment by segment - the path is broken up in‐
10548 to / separated strings and these are encrypted individually.
10549
10550 File segments are padded using using PKCS#7 to a multiple of 16 bytes
10551 before encryption.
10552
10553 They are then encrypted with EME using AES with 256 bit key. EME
10554 (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 pa‐
10555 per “A Parallelizable Enciphering Mode” by Halevi and Rogaway.
10556
10557 This makes for deterministic encryption which is what we want - the
10558 same filename must encrypt to the same thing otherwise we can't find it
10559 on the cloud storage system.
10560
10561 This means that
10562
10563 · filenames with the same name will encrypt the same
10564
10565 · filenames which start the same won't have a common prefix
10566
10567 This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of
10568 which are derived from the user password.
10569
10570 After encryption they are written out using a modified version of stan‐
10571 dard base32 encoding as described in RFC4648. The standard encoding is
10572 modified in two ways:
10573
10574 · it becomes lower case (no-one likes upper case filenames!)
10575
10576 · we strip the padding character =
10577
10578 base32 is used rather than the more efficient base64 so rclone can be
10579 used on case insensitive remotes (eg Windows, Amazon Drive).
10580
10581 Key derivation
10582 Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional
10583 user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key
10584 material required. If the user doesn't supply a salt then rclone uses
10585 an internal one.
10586
10587 scrypt makes it impractical to mount a dictionary attack on rclone en‐
10588 crypted data. For full protection against this you should always use a
10589 salt.
10590
10591 Dropbox
10592 Paths are specified as remote:path
10593
10594 Dropbox paths may be as deep as required, eg remote:directory/subdirec‐
10595 tory.
10596
10597 The initial setup for dropbox involves getting a token from Dropbox
10598 which you need to do in your browser. rclone config walks you through
10599 it.
10600
10601 Here is an example of how to make a remote called remote. First run:
10602
10603 rclone config
10604
10605 This will guide you through an interactive setup process:
10606
10607 n) New remote
10608 d) Delete remote
10609 q) Quit config
10610 e/n/d/q> n
10611 name> remote
10612 Type of storage to configure.
10613 Choose a number from below, or type in your own value
10614 1 / Amazon Drive
10615 \ "amazon cloud drive"
10616 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
10617 \ "s3"
10618 3 / Backblaze B2
10619 \ "b2"
10620 4 / Dropbox
10621 \ "dropbox"
10622 5 / Encrypt/Decrypt a remote
10623 \ "crypt"
10624 6 / Google Cloud Storage (this is not Google Drive)
10625 \ "google cloud storage"
10626 7 / Google Drive
10627 \ "drive"
10628 8 / Hubic
10629 \ "hubic"
10630 9 / Local Disk
10631 \ "local"
10632 10 / Microsoft OneDrive
10633 \ "onedrive"
10634 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
10635 \ "swift"
10636 12 / SSH/SFTP Connection
10637 \ "sftp"
10638 13 / Yandex Disk
10639 \ "yandex"
10640 Storage> 4
10641 Dropbox App Key - leave blank normally.
10642 app_key>
10643 Dropbox App Secret - leave blank normally.
10644 app_secret>
10645 Remote config
10646 Please visit:
10647 https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
10648 Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
10649 --------------------
10650 [remote]
10651 app_key =
10652 app_secret =
10653 token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
10654 --------------------
10655 y) Yes this is OK
10656 e) Edit this remote
10657 d) Delete this remote
10658 y/e/d> y
10659
10660 You can then use it like this,
10661
10662 List directories in top level of your dropbox
10663
10664 rclone lsd remote:
10665
10666 List all the files in your dropbox
10667
10668 rclone ls remote:
10669
10670 To copy a local directory to a dropbox directory called backup
10671
10672 rclone copy /home/source remote:backup
10673
10674 Dropbox for business
10675 Rclone supports Dropbox for business and Team Folders.
10676
10677 When using Dropbox for business remote: and remote:path/to/file will
10678 refer to your personal folder.
10679
10680 If you wish to see Team Folders you must use a leading / in the path,
10681 so rclone lsd remote:/ will refer to the root and show you all Team
10682 Folders and your User Folder.
10683
10684 You can then use team folders like this remote:/TeamFolder and re‐
10685 mote:/TeamFolder/path/to/file.
10686
10687 A leading / for a Dropbox personal account will do nothing, but it will
10688 take an extra HTTP transaction so it should be avoided.
10689
10690 Modified time and Hashes
10691 Dropbox supports modified times, but the only way to set a modification
10692 time is to re-upload the file.
10693
10694 This means that if you uploaded your data with an older version of
10695 rclone which didn't support the v2 API and modified times, rclone will
10696 decide to upload all your old data to fix the modification times. If
10697 you don't want this to happen use --size-only or --checksum flag to
10698 stop it.
10699
10700 Dropbox supports its own hash type (https://www.dropbox.com/develop‐
10701 ers/reference/content-hash) which is checked for all transfers.
10702
10703 Standard Options
10704 Here are the standard options specific to dropbox (Dropbox).
10705
10706 –dropbox-client-id
10707 Dropbox App Client Id Leave blank normally.
10708
10709 · Config: client_id
10710
10711 · Env Var: RCLONE_DROPBOX_CLIENT_ID
10712
10713 · Type: string
10714
10715 · Default: ""
10716
10717 –dropbox-client-secret
10718 Dropbox App Client Secret Leave blank normally.
10719
10720 · Config: client_secret
10721
10722 · Env Var: RCLONE_DROPBOX_CLIENT_SECRET
10723
10724 · Type: string
10725
10726 · Default: ""
10727
10728 Advanced Options
10729 Here are the advanced options specific to dropbox (Dropbox).
10730
10731 –dropbox-chunk-size
10732 Upload chunk size. (< 150M).
10733
10734 Any files larger than this will be uploaded in chunks of this size.
10735
10736 Note that chunks are buffered in memory (one at a time) so rclone can
10737 deal with retries. Setting this larger will increase the speed slight‐
10738 ly (at most 10% for 128MB in tests) at the cost of using more memory.
10739 It can be set smaller if you are tight on memory.
10740
10741 · Config: chunk_size
10742
10743 · Env Var: RCLONE_DROPBOX_CHUNK_SIZE
10744
10745 · Type: SizeSuffix
10746
10747 · Default: 48M
10748
10749 –dropbox-impersonate
10750 Impersonate this user when using a business account.
10751
10752 · Config: impersonate
10753
10754 · Env Var: RCLONE_DROPBOX_IMPERSONATE
10755
10756 · Type: string
10757
10758 · Default: ""
10759
10760 Limitations
10761 Note that Dropbox is case insensitive so you can't have a file called
10762 “Hello.doc” and one called “hello.doc”.
10763
10764 There are some file names such as thumbs.db which Dropbox can't store.
10765 There is a full list of them in the “Ignored Files” section of this
10766 document (https://www.dropbox.com/en/help/145). Rclone will issue an
10767 error message File name disallowed - not uploading if it attempts to
10768 upload one of those file names, but the sync won't fail.
10769
10770 If you have more than 10,000 files in a directory then
10771 rclone purge dropbox:dir will return the error
10772 Failed to purge: There are too many files involved in this operation.
10773 As a work-around do an rclone delete dropbox:dir followed by an
10774 rclone rmdir dropbox:dir.
10775
10776 FTP
10777 FTP is the File Transfer Protocol. FTP support is provided using the
10778 github.com/jlaffaye/ftp (https://godoc.org/github.com/jlaffaye/ftp)
10779 package.
10780
10781 Here is an example of making an FTP configuration. First run
10782
10783 rclone config
10784
10785 This will guide you through an interactive setup process. An FTP re‐
10786 mote only needs a host together with and a username and a password.
10787 With anonymous FTP server, you will need to use anonymous as username
10788 and your email address as the password.
10789
10790 No remotes found - make a new one
10791 n) New remote
10792 r) Rename remote
10793 c) Copy remote
10794 s) Set configuration password
10795 q) Quit config
10796 n/r/c/s/q> n
10797 name> remote
10798 Type of storage to configure.
10799 Choose a number from below, or type in your own value
10800 1 / Amazon Drive
10801 \ "amazon cloud drive"
10802 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
10803 \ "s3"
10804 3 / Backblaze B2
10805 \ "b2"
10806 4 / Dropbox
10807 \ "dropbox"
10808 5 / Encrypt/Decrypt a remote
10809 \ "crypt"
10810 6 / FTP Connection
10811 \ "ftp"
10812 7 / Google Cloud Storage (this is not Google Drive)
10813 \ "google cloud storage"
10814 8 / Google Drive
10815 \ "drive"
10816 9 / Hubic
10817 \ "hubic"
10818 10 / Local Disk
10819 \ "local"
10820 11 / Microsoft OneDrive
10821 \ "onedrive"
10822 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
10823 \ "swift"
10824 13 / SSH/SFTP Connection
10825 \ "sftp"
10826 14 / Yandex Disk
10827 \ "yandex"
10828 Storage> ftp
10829 FTP host to connect to
10830 Choose a number from below, or type in your own value
10831 1 / Connect to ftp.example.com
10832 \ "ftp.example.com"
10833 host> ftp.example.com
10834 FTP username, leave blank for current username, ncw
10835 user>
10836 FTP port, leave blank to use default (21)
10837 port>
10838 FTP password
10839 y) Yes type in my own password
10840 g) Generate random password
10841 y/g> y
10842 Enter the password:
10843 password:
10844 Confirm the password:
10845 password:
10846 Remote config
10847 --------------------
10848 [remote]
10849 host = ftp.example.com
10850 user =
10851 port =
10852 pass = *** ENCRYPTED ***
10853 --------------------
10854 y) Yes this is OK
10855 e) Edit this remote
10856 d) Delete this remote
10857 y/e/d> y
10858
10859 This remote is called remote and can now be used like this
10860
10861 See all directories in the home directory
10862
10863 rclone lsd remote:
10864
10865 Make a new directory
10866
10867 rclone mkdir remote:path/to/directory
10868
10869 List the contents of a directory
10870
10871 rclone ls remote:path/to/directory
10872
10873 Sync /home/local/directory to the remote directory, deleting any excess
10874 files in the directory.
10875
10876 rclone sync /home/local/directory remote:directory
10877
10878 Modified time
10879 FTP does not support modified times. Any times you see on the server
10880 will be time of upload.
10881
10882 Checksums
10883 FTP does not support any checksums.
10884
10885 Standard Options
10886 Here are the standard options specific to ftp (FTP Connection).
10887
10888 –ftp-host
10889 FTP host to connect to
10890
10891 · Config: host
10892
10893 · Env Var: RCLONE_FTP_HOST
10894
10895 · Type: string
10896
10897 · Default: ""
10898
10899 · Examples:
10900
10901 · “ftp.example.com”
10902
10903 · Connect to ftp.example.com
10904
10905 –ftp-user
10906 FTP username, leave blank for current username, ncw
10907
10908 · Config: user
10909
10910 · Env Var: RCLONE_FTP_USER
10911
10912 · Type: string
10913
10914 · Default: ""
10915
10916 –ftp-port
10917 FTP port, leave blank to use default (21)
10918
10919 · Config: port
10920
10921 · Env Var: RCLONE_FTP_PORT
10922
10923 · Type: string
10924
10925 · Default: ""
10926
10927 –ftp-pass
10928 FTP password
10929
10930 · Config: pass
10931
10932 · Env Var: RCLONE_FTP_PASS
10933
10934 · Type: string
10935
10936 · Default: ""
10937
10938 Advanced Options
10939 Here are the advanced options specific to ftp (FTP Connection).
10940
10941 –ftp-concurrency
10942 Maximum number of FTP simultaneous connections, 0 for unlimited
10943
10944 · Config: concurrency
10945
10946 · Env Var: RCLONE_FTP_CONCURRENCY
10947
10948 · Type: int
10949
10950 · Default: 0
10951
10952 Limitations
10953 Note that since FTP isn't HTTP based the following flags don't work
10954 with it: --dump-headers, --dump-bodies, --dump-auth
10955
10956 Note that --timeout isn't supported (but --contimeout is).
10957
10958 Note that --bind isn't supported.
10959
10960 FTP could support server side move but doesn't yet.
10961
10962 Note that the ftp backend does not support the ftp_proxy environment
10963 variable yet.
10964
10965 Google Cloud Storage
10966 Paths are specified as remote:bucket (or remote: for the lsd command.)
10967 You may put subdirectories in too, eg remote:bucket/path/to/dir.
10968
10969 The initial setup for google cloud storage involves getting a token
10970 from Google Cloud Storage which you need to do in your browser.
10971 rclone config walks you through it.
10972
10973 Here is an example of how to make a remote called remote. First run:
10974
10975 rclone config
10976
10977 This will guide you through an interactive setup process:
10978
10979 n) New remote
10980 d) Delete remote
10981 q) Quit config
10982 e/n/d/q> n
10983 name> remote
10984 Type of storage to configure.
10985 Choose a number from below, or type in your own value
10986 1 / Amazon Drive
10987 \ "amazon cloud drive"
10988 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
10989 \ "s3"
10990 3 / Backblaze B2
10991 \ "b2"
10992 4 / Dropbox
10993 \ "dropbox"
10994 5 / Encrypt/Decrypt a remote
10995 \ "crypt"
10996 6 / Google Cloud Storage (this is not Google Drive)
10997 \ "google cloud storage"
10998 7 / Google Drive
10999 \ "drive"
11000 8 / Hubic
11001 \ "hubic"
11002 9 / Local Disk
11003 \ "local"
11004 10 / Microsoft OneDrive
11005 \ "onedrive"
11006 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
11007 \ "swift"
11008 12 / SSH/SFTP Connection
11009 \ "sftp"
11010 13 / Yandex Disk
11011 \ "yandex"
11012 Storage> 6
11013 Google Application Client Id - leave blank normally.
11014 client_id>
11015 Google Application Client Secret - leave blank normally.
11016 client_secret>
11017 Project number optional - needed only for list/create/delete buckets - see your developer console.
11018 project_number> 12345678
11019 Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
11020 service_account_file>
11021 Access Control List for new objects.
11022 Choose a number from below, or type in your own value
11023 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
11024 \ "authenticatedRead"
11025 2 / Object owner gets OWNER access, and project team owners get OWNER access.
11026 \ "bucketOwnerFullControl"
11027 3 / Object owner gets OWNER access, and project team owners get READER access.
11028 \ "bucketOwnerRead"
11029 4 / Object owner gets OWNER access [default if left blank].
11030 \ "private"
11031 5 / Object owner gets OWNER access, and project team members get access according to their roles.
11032 \ "projectPrivate"
11033 6 / Object owner gets OWNER access, and all Users get READER access.
11034 \ "publicRead"
11035 object_acl> 4
11036 Access Control List for new buckets.
11037 Choose a number from below, or type in your own value
11038 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
11039 \ "authenticatedRead"
11040 2 / Project team owners get OWNER access [default if left blank].
11041 \ "private"
11042 3 / Project team members get access according to their roles.
11043 \ "projectPrivate"
11044 4 / Project team owners get OWNER access, and all Users get READER access.
11045 \ "publicRead"
11046 5 / Project team owners get OWNER access, and all Users get WRITER access.
11047 \ "publicReadWrite"
11048 bucket_acl> 2
11049 Location for the newly created buckets.
11050 Choose a number from below, or type in your own value
11051 1 / Empty for default location (US).
11052 \ ""
11053 2 / Multi-regional location for Asia.
11054 \ "asia"
11055 3 / Multi-regional location for Europe.
11056 \ "eu"
11057 4 / Multi-regional location for United States.
11058 \ "us"
11059 5 / Taiwan.
11060 \ "asia-east1"
11061 6 / Tokyo.
11062 \ "asia-northeast1"
11063 7 / Singapore.
11064 \ "asia-southeast1"
11065 8 / Sydney.
11066 \ "australia-southeast1"
11067 9 / Belgium.
11068 \ "europe-west1"
11069 10 / London.
11070 \ "europe-west2"
11071 11 / Iowa.
11072 \ "us-central1"
11073 12 / South Carolina.
11074 \ "us-east1"
11075 13 / Northern Virginia.
11076 \ "us-east4"
11077 14 / Oregon.
11078 \ "us-west1"
11079 location> 12
11080 The storage class to use when storing objects in Google Cloud Storage.
11081 Choose a number from below, or type in your own value
11082 1 / Default
11083 \ ""
11084 2 / Multi-regional storage class
11085 \ "MULTI_REGIONAL"
11086 3 / Regional storage class
11087 \ "REGIONAL"
11088 4 / Nearline storage class
11089 \ "NEARLINE"
11090 5 / Coldline storage class
11091 \ "COLDLINE"
11092 6 / Durable reduced availability storage class
11093 \ "DURABLE_REDUCED_AVAILABILITY"
11094 storage_class> 5
11095 Remote config
11096 Use auto config?
11097 * Say Y if not sure
11098 * Say N if you are working on a remote or headless machine or Y didn't work
11099 y) Yes
11100 n) No
11101 y/n> y
11102 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
11103 Log in and authorize rclone for access
11104 Waiting for code...
11105 Got code
11106 --------------------
11107 [remote]
11108 type = google cloud storage
11109 client_id =
11110 client_secret =
11111 token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
11112 project_number = 12345678
11113 object_acl = private
11114 bucket_acl = private
11115 --------------------
11116 y) Yes this is OK
11117 e) Edit this remote
11118 d) Delete this remote
11119 y/e/d> y
11120
11121 Note that rclone runs a webserver on your local machine to collect the
11122 token as returned from Google if you use auto config mode. This only
11123 runs from the moment it opens your browser to the moment you get back
11124 the verification code. This is on http://127.0.0.1:53682/ and this it
11125 may require you to unblock it temporarily if you are running a host
11126 firewall, or use manual mode.
11127
11128 This remote is called remote and can now be used like this
11129
11130 See all the buckets in your project
11131
11132 rclone lsd remote:
11133
11134 Make a new bucket
11135
11136 rclone mkdir remote:bucket
11137
11138 List the contents of a bucket
11139
11140 rclone ls remote:bucket
11141
11142 Sync /home/local/directory to the remote bucket, deleting any excess
11143 files in the bucket.
11144
11145 rclone sync /home/local/directory remote:bucket
11146
11147 Service Account support
11148 You can set up rclone with Google Cloud Storage in an unattended mode,
11149 i.e. not tied to a specific end-user Google account. This is useful
11150 when you want to synchronise files onto machines that don't have ac‐
11151 tively logged-in users, for example build machines.
11152
11153 To get credentials for Google Cloud Platform IAM Service Accounts
11154 (https://cloud.google.com/iam/docs/service-accounts), please head to
11155 the Service Account (https://console.cloud.google.com/permissions/ser‐
11156 viceaccounts) section of the Google Developer Console. Service Ac‐
11157 counts behave just like normal User permissions in Google Cloud Storage
11158 ACLs (https://cloud.google.com/storage/docs/access-control), so you can
11159 limit their access (e.g. make them read only). After creating an ac‐
11160 count, a JSON file containing the Service Account's credentials will be
11161 downloaded onto your machines. These credentials are what rclone will
11162 use for authentication.
11163
11164 To use a Service Account instead of OAuth2 token flow, enter the path
11165 to your Service Account credentials at the service_account_file prompt
11166 and rclone won't use the browser based authentication flow. If you'd
11167 rather stuff the contents of the credentials file into the rclone con‐
11168 fig file, you can set service_account_credentials with the actual con‐
11169 tents of the file instead, or set the equivalent environment variable.
11170
11171 Application Default Credentials
11172 If no other source of credentials is provided, rclone will fall back to
11173 Application Default Credentials (https://cloud.google.com/video-intel‐
11174 ligence/docs/common/auth#authenticating_with_application_default_cre‐
11175 dentials) this is useful both when you already have configured authen‐
11176 tication for your developer account, or in production when running on a
11177 google compute host. Note that if running in docker, you may need to
11178 run additional commands on your google compute machine - see this page
11179 (https://cloud.google.com/container-registry/docs/advanced-authentica‐
11180 tion#gcloud_as_a_docker_credential_helper).
11181
11182 Note that in the case application default credentials are used, there
11183 is no need to explicitly configure a project number.
11184
11185 –fast-list
11186 This remote supports --fast-list which allows you to use fewer transac‐
11187 tions in exchange for more memory. See the rclone docs (/docs/#fast-
11188 list) for more details.
11189
11190 Modified time
11191 Google google cloud storage stores md5sums natively and rclone stores
11192 modification times as metadata on the object, under the “mtime” key in
11193 RFC3339 format accurate to 1ns.
11194
11195 Standard Options
11196 Here are the standard options specific to google cloud storage (Google
11197 Cloud Storage (this is not Google Drive)).
11198
11199 –gcs-client-id
11200 Google Application Client Id Leave blank normally.
11201
11202 · Config: client_id
11203
11204 · Env Var: RCLONE_GCS_CLIENT_ID
11205
11206 · Type: string
11207
11208 · Default: ""
11209
11210 –gcs-client-secret
11211 Google Application Client Secret Leave blank normally.
11212
11213 · Config: client_secret
11214
11215 · Env Var: RCLONE_GCS_CLIENT_SECRET
11216
11217 · Type: string
11218
11219 · Default: ""
11220
11221 –gcs-project-number
11222 Project number. Optional - needed only for list/create/delete buckets
11223 - see your developer console.
11224
11225 · Config: project_number
11226
11227 · Env Var: RCLONE_GCS_PROJECT_NUMBER
11228
11229 · Type: string
11230
11231 · Default: ""
11232
11233 –gcs-service-account-file
11234 Service Account Credentials JSON file path Leave blank normally. Need‐
11235 ed only if you want use SA instead of interactive login.
11236
11237 · Config: service_account_file
11238
11239 · Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
11240
11241 · Type: string
11242
11243 · Default: ""
11244
11245 –gcs-service-account-credentials
11246 Service Account Credentials JSON blob Leave blank normally. Needed on‐
11247 ly if you want use SA instead of interactive login.
11248
11249 · Config: service_account_credentials
11250
11251 · Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
11252
11253 · Type: string
11254
11255 · Default: ""
11256
11257 –gcs-object-acl
11258 Access Control List for new objects.
11259
11260 · Config: object_acl
11261
11262 · Env Var: RCLONE_GCS_OBJECT_ACL
11263
11264 · Type: string
11265
11266 · Default: ""
11267
11268 · Examples:
11269
11270 · “authenticatedRead”
11271
11272 · Object owner gets OWNER access, and all Authenticated Users get
11273 READER access.
11274
11275 · “bucketOwnerFullControl”
11276
11277 · Object owner gets OWNER access, and project team owners get OWNER
11278 access.
11279
11280 · “bucketOwnerRead”
11281
11282 · Object owner gets OWNER access, and project team owners get READ‐
11283 ER access.
11284
11285 · “private”
11286
11287 · Object owner gets OWNER access [default if left blank].
11288
11289 · “projectPrivate”
11290
11291 · Object owner gets OWNER access, and project team members get ac‐
11292 cess according to their roles.
11293
11294 · “publicRead”
11295
11296 · Object owner gets OWNER access, and all Users get READER access.
11297
11298 –gcs-bucket-acl
11299 Access Control List for new buckets.
11300
11301 · Config: bucket_acl
11302
11303 · Env Var: RCLONE_GCS_BUCKET_ACL
11304
11305 · Type: string
11306
11307 · Default: ""
11308
11309 · Examples:
11310
11311 · “authenticatedRead”
11312
11313 · Project team owners get OWNER access, and all Authenticated Users
11314 get READER access.
11315
11316 · “private”
11317
11318 · Project team owners get OWNER access [default if left blank].
11319
11320 · “projectPrivate”
11321
11322 · Project team members get access according to their roles.
11323
11324 · “publicRead”
11325
11326 · Project team owners get OWNER access, and all Users get READER
11327 access.
11328
11329 · “publicReadWrite”
11330
11331 · Project team owners get OWNER access, and all Users get WRITER
11332 access.
11333
11334 –gcs-bucket-policy-only
11335 Access checks should use bucket-level IAM policies.
11336
11337 If you want to upload objects to a bucket with Bucket Policy Only set
11338 then you will need to set this.
11339
11340 When it is set, rclone:
11341
11342 · ignores ACLs set on buckets
11343
11344 · ignores ACLs set on objects
11345
11346 · creates buckets with Bucket Policy Only set
11347
11348 Docs: https://cloud.google.com/storage/docs/bucket-policy-only
11349
11350 · Config: bucket_policy_only
11351
11352 · Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
11353
11354 · Type: bool
11355
11356 · Default: false
11357
11358 –gcs-location
11359 Location for the newly created buckets.
11360
11361 · Config: location
11362
11363 · Env Var: RCLONE_GCS_LOCATION
11364
11365 · Type: string
11366
11367 · Default: ""
11368
11369 · Examples:
11370
11371 · ""
11372
11373 · Empty for default location (US).
11374
11375 · “asia”
11376
11377 · Multi-regional location for Asia.
11378
11379 · “eu”
11380
11381 · Multi-regional location for Europe.
11382
11383 · “us”
11384
11385 · Multi-regional location for United States.
11386
11387 · “asia-east1”
11388
11389 · Taiwan.
11390
11391 · “asia-east2”
11392
11393 · Hong Kong.
11394
11395 · “asia-northeast1”
11396
11397 · Tokyo.
11398
11399 · “asia-south1”
11400
11401 · Mumbai.
11402
11403 · “asia-southeast1”
11404
11405 · Singapore.
11406
11407 · “australia-southeast1”
11408
11409 · Sydney.
11410
11411 · “europe-north1”
11412
11413 · Finland.
11414
11415 · “europe-west1”
11416
11417 · Belgium.
11418
11419 · “europe-west2”
11420
11421 · London.
11422
11423 · “europe-west3”
11424
11425 · Frankfurt.
11426
11427 · “europe-west4”
11428
11429 · Netherlands.
11430
11431 · “us-central1”
11432
11433 · Iowa.
11434
11435 · “us-east1”
11436
11437 · South Carolina.
11438
11439 · “us-east4”
11440
11441 · Northern Virginia.
11442
11443 · “us-west1”
11444
11445 · Oregon.
11446
11447 · “us-west2”
11448
11449 · California.
11450
11451 –gcs-storage-class
11452 The storage class to use when storing objects in Google Cloud Storage.
11453
11454 · Config: storage_class
11455
11456 · Env Var: RCLONE_GCS_STORAGE_CLASS
11457
11458 · Type: string
11459
11460 · Default: ""
11461
11462 · Examples:
11463
11464 · ""
11465
11466 · Default
11467
11468 · “MULTI_REGIONAL”
11469
11470 · Multi-regional storage class
11471
11472 · “REGIONAL”
11473
11474 · Regional storage class
11475
11476 · “NEARLINE”
11477
11478 · Nearline storage class
11479
11480 · “COLDLINE”
11481
11482 · Coldline storage class
11483
11484 · “DURABLE_REDUCED_AVAILABILITY”
11485
11486 · Durable reduced availability storage class
11487
11488 Google Drive
11489 Paths are specified as drive:path
11490
11491 Drive paths may be as deep as required, eg drive:directory/subdirecto‐
11492 ry.
11493
11494 The initial setup for drive involves getting a token from Google drive
11495 which you need to do in your browser. rclone config walks you through
11496 it.
11497
11498 Here is an example of how to make a remote called remote. First run:
11499
11500 rclone config
11501
11502 This will guide you through an interactive setup process:
11503
11504 No remotes found - make a new one
11505 n) New remote
11506 r) Rename remote
11507 c) Copy remote
11508 s) Set configuration password
11509 q) Quit config
11510 n/r/c/s/q> n
11511 name> remote
11512 Type of storage to configure.
11513 Choose a number from below, or type in your own value
11514 [snip]
11515 10 / Google Drive
11516 \ "drive"
11517 [snip]
11518 Storage> drive
11519 Google Application Client Id - leave blank normally.
11520 client_id>
11521 Google Application Client Secret - leave blank normally.
11522 client_secret>
11523 Scope that rclone should use when requesting access from drive.
11524 Choose a number from below, or type in your own value
11525 1 / Full access all files, excluding Application Data Folder.
11526 \ "drive"
11527 2 / Read-only access to file metadata and file contents.
11528 \ "drive.readonly"
11529 / Access to files created by rclone only.
11530 3 | These are visible in the drive website.
11531 | File authorization is revoked when the user deauthorizes the app.
11532 \ "drive.file"
11533 / Allows read and write access to the Application Data folder.
11534 4 | This is not visible in the drive website.
11535 \ "drive.appfolder"
11536 / Allows read-only access to file metadata but
11537 5 | does not allow any access to read or download file content.
11538 \ "drive.metadata.readonly"
11539 scope> 1
11540 ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
11541 root_folder_id>
11542 Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
11543 service_account_file>
11544 Remote config
11545 Use auto config?
11546 * Say Y if not sure
11547 * Say N if you are working on a remote or headless machine or Y didn't work
11548 y) Yes
11549 n) No
11550 y/n> y
11551 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
11552 Log in and authorize rclone for access
11553 Waiting for code...
11554 Got code
11555 Configure this as a team drive?
11556 y) Yes
11557 n) No
11558 y/n> n
11559 --------------------
11560 [remote]
11561 client_id =
11562 client_secret =
11563 scope = drive
11564 root_folder_id =
11565 service_account_file =
11566 token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
11567 --------------------
11568 y) Yes this is OK
11569 e) Edit this remote
11570 d) Delete this remote
11571 y/e/d> y
11572
11573 Note that rclone runs a webserver on your local machine to collect the
11574 token as returned from Google if you use auto config mode. This only
11575 runs from the moment it opens your browser to the moment you get back
11576 the verification code. This is on http://127.0.0.1:53682/ and this it
11577 may require you to unblock it temporarily if you are running a host
11578 firewall, or use manual mode.
11579
11580 You can then use it like this,
11581
11582 List directories in top level of your drive
11583
11584 rclone lsd remote:
11585
11586 List all the files in your drive
11587
11588 rclone ls remote:
11589
11590 To copy a local directory to a drive directory called backup
11591
11592 rclone copy /home/source remote:backup
11593
11594 Scopes
11595 Rclone allows you to select which scope you would like for rclone to
11596 use. This changes what type of token is granted to rclone. The scopes
11597 are defined here. (https://developers.google.com/drive/v3/web/about-
11598 auth).
11599
11600 The scope are
11601
11602 drive
11603 This is the default scope and allows full access to all files, except
11604 for the Application Data Folder (see below).
11605
11606 Choose this one if you aren't sure.
11607
11608 drive.readonly
11609 This allows read only access to all files. Files may be listed and
11610 downloaded but not uploaded, renamed or deleted.
11611
11612 drive.file
11613 With this scope rclone can read/view/modify only those files and fold‐
11614 ers it creates.
11615
11616 So if you uploaded files to drive via the web interface (or any other
11617 means) they will not be visible to rclone.
11618
11619 This can be useful if you are using rclone to backup data and you want
11620 to be sure confidential data on your drive is not visible to rclone.
11621
11622 Files created with this scope are visible in the web interface.
11623
11624 drive.appfolder
11625 This gives rclone its own private area to store files. Rclone will not
11626 be able to see any other files on your drive and you won't be able to
11627 see rclone's files from the web interface either.
11628
11629 drive.metadata.readonly
11630 This allows read only access to file names only. It does not allow
11631 rclone to download or upload data, or rename or delete files or direc‐
11632 tories.
11633
11634 Root folder ID
11635 You can set the root_folder_id for rclone. This is the directory
11636 (identified by its Folder ID) that rclone considers to be the root of
11637 your drive.
11638
11639 Normally you will leave this blank and rclone will determine the cor‐
11640 rect root to use itself.
11641
11642 However you can set this to restrict rclone to a specific folder hier‐
11643 archy or to access data within the “Computers” tab on the drive web in‐
11644 terface (where files from Google's Backup and Sync desktop program go).
11645
11646 In order to do this you will have to find the Folder ID of the directo‐
11647 ry you wish rclone to display. This will be the last segment of the
11648 URL when you open the relevant folder in the drive web interface.
11649
11650 So if the folder you want rclone to use has a URL which looks like
11651 https://drive.google.com/drive/fold‐
11652 ers/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh in the browser, then you use
11653 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the root_folder_id in the config.
11654
11655 NB folders under the “Computers” tab seem to be read only (drive gives
11656 a 500 error) when using rclone.
11657
11658 There doesn't appear to be an API to discover the folder IDs of the
11659 “Computers” tab - please contact us if you know otherwise!
11660
11661 Note also that rclone can't access any data under the “Backups” tab on
11662 the google drive web interface yet.
11663
11664 Service Account support
11665 You can set up rclone with Google Drive in an unattended mode, i.e. not
11666 tied to a specific end-user Google account. This is useful when you
11667 want to synchronise files onto machines that don't have actively
11668 logged-in users, for example build machines.
11669
11670 To use a Service Account instead of OAuth2 token flow, enter the path
11671 to your Service Account credentials at the service_account_file prompt
11672 during rclone config and rclone won't use the browser based authentica‐
11673 tion flow. If you'd rather stuff the contents of the credentials file
11674 into the rclone config file, you can set service_account_credentials
11675 with the actual contents of the file instead, or set the equivalent en‐
11676 vironment variable.
11677
11678 Use case - Google Apps/G-suite account and individual Drive
11679 Let's say that you are the administrator of a Google Apps (old) or
11680 G-suite account. The goal is to store data on an individual's Drive
11681 account, who IS a member of the domain. We'll call the domain exam‐
11682 ple.com, and the user foo@example.com.
11683
11684 There's a few steps we need to go through to accomplish this:
11685
11686 1. Create a service account for example.com
11687 · To create a service account and obtain its credentials, go to the
11688 Google Developer Console (https://console.developers.google.com).
11689
11690 · You must have a project - create one if you don't.
11691
11692 · Then go to “IAM & admin” -> “Service Accounts”.
11693
11694 · Use the “Create Credentials” button. Fill in “Service account name”
11695 with something that identifies your client. “Role” can be empty.
11696
11697 · Tick “Furnish a new private key” - select “Key type JSON”.
11698
11699 · Tick “Enable G Suite Domain-wide Delegation”. This option makes “im‐
11700 personation” possible, as documented here: Delegating domain-wide au‐
11701 thority to the service account (https://developers.google.com/identi‐
11702 ty/protocols/OAuth2ServiceAccount#delegatingauthority)
11703
11704 · These credentials are what rclone will use for authentication. If
11705 you ever need to remove access, press the “Delete service account
11706 key” button.
11707
11708 2. Allowing API access to example.com Google Drive
11709 · Go to example.com's admin console
11710
11711 · Go into “Security” (or use the search bar)
11712
11713 · Select “Show more” and then “Advanced settings”
11714
11715 · Select “Manage API client access” in the “Authentication” section
11716
11717 · In the “Client Name” field enter the service account's “Client ID” -
11718 this can be found in the Developer Console under “IAM & Admin” ->
11719 “Service Accounts”, then “View Client ID” for the newly created ser‐
11720 vice account. It is a ~21 character numerical string.
11721
11722 · In the next field, “One or More API Scopes”, enter
11723 https://www.googleapis.com/auth/drive to grant access to Google Drive
11724 specifically.
11725
11726 3. Configure rclone, assuming a new install
11727 rclone config
11728
11729 n/s/q> n # New
11730 name>gdrive # Gdrive is an example name
11731 Storage> # Select the number shown for Google Drive
11732 client_id> # Can be left blank
11733 client_secret> # Can be left blank
11734 scope> # Select your scope, 1 for example
11735 root_folder_id> # Can be left blank
11736 service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
11737 y/n> # Auto config, y
11738
11739 4. Verify that it's working
11740 · rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
11741
11742 · The arguments do:
11743
11744 · -v - verbose logging
11745
11746 · --drive-impersonate foo@example.com - this is what does the magic,
11747 pretending to be user foo.
11748
11749 · lsf - list files in a parsing friendly way
11750
11751 · gdrive:backup - use the remote called gdrive, work in the folder
11752 named backup.
11753
11754 Team drives
11755 If you want to configure the remote to point to a Google Team Drive
11756 then answer y to the question Configure this as a team drive?.
11757
11758 This will fetch the list of Team Drives from google and allow you to
11759 configure which one you want to use. You can also type in a team drive
11760 ID if you prefer.
11761
11762 For example:
11763
11764 Configure this as a team drive?
11765 y) Yes
11766 n) No
11767 y/n> y
11768 Fetching team drive list...
11769 Choose a number from below, or type in your own value
11770 1 / Rclone Test
11771 \ "xxxxxxxxxxxxxxxxxxxx"
11772 2 / Rclone Test 2
11773 \ "yyyyyyyyyyyyyyyyyyyy"
11774 3 / Rclone Test 3
11775 \ "zzzzzzzzzzzzzzzzzzzz"
11776 Enter a Team Drive ID> 1
11777 --------------------
11778 [remote]
11779 client_id =
11780 client_secret =
11781 token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
11782 team_drive = xxxxxxxxxxxxxxxxxxxx
11783 --------------------
11784 y) Yes this is OK
11785 e) Edit this remote
11786 d) Delete this remote
11787 y/e/d> y
11788
11789 –fast-list
11790 This remote supports --fast-list which allows you to use fewer transac‐
11791 tions in exchange for more memory. See the rclone docs (/docs/#fast-
11792 list) for more details.
11793
11794 It does this by combining multiple list calls into a single API re‐
11795 quest.
11796
11797 This works by combining many '%s' in parents filters into one expres‐
11798 sion. To list the contents of directories a, b and c, the following
11799 requests will be send by the regular List function:
11800
11801 trashed=false and 'a' in parents
11802 trashed=false and 'b' in parents
11803 trashed=false and 'c' in parents
11804
11805 These can now be combined into a single request:
11806
11807 trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
11808
11809 The implementation of ListR will put up to 50 parents filters into one
11810 request. It will use the --checkers value to specify the number of re‐
11811 quests to run in parallel.
11812
11813 In tests, these batch requests were up to 20x faster than the regular
11814 method. Running the following command against different sized folders
11815 gives:
11816
11817 rclone lsjson -vv -R --checkers=6 gdrive:folder
11818
11819 small folder (220 directories, 700 files):
11820
11821 · without --fast-list: 38s
11822
11823 · with --fast-list: 10s
11824
11825 large folder (10600 directories, 39000 files):
11826
11827 · without --fast-list: 22:05 min
11828
11829 · with --fast-list: 58s
11830
11831 Modified time
11832 Google drive stores modification times accurate to 1 ms.
11833
11834 Revisions
11835 Google drive stores revisions of files. When you upload a change to an
11836 existing file to google drive using rclone it will create a new revi‐
11837 sion of that file.
11838
11839 Revisions follow the standard google policy which at time of writing
11840 was
11841
11842 · They are deleted after 30 days or 100 revisions (whatever comes
11843 first).
11844
11845 · They do not count towards a user storage quota.
11846
11847 Deleting files
11848 By default rclone will send all files to the trash when deleting files.
11849 If deleting them permanently is required then use the
11850 --drive-use-trash=false flag, or set the equivalent environment vari‐
11851 able.
11852
11853 Emptying trash
11854 If you wish to empty your trash you can use the rclone cleanup remote:
11855 command which will permanently delete all your trashed files. This
11856 command does not take any path arguments.
11857
11858 Quota information
11859 To view your current quota you can use the rclone about remote: command
11860 which will display your usage limit (quota), the usage in Google Drive,
11861 the size of all files in the Trash and the space used by other Google
11862 services such as Gmail. This command does not take any path arguments.
11863
11864 Import/Export of google documents
11865 Google documents can be exported from and uploaded to Google Drive.
11866
11867 When rclone downloads a Google doc it chooses a format to download de‐
11868 pending upon the --drive-export-formats setting. By default the export
11869 formats are docx,xlsx,pptx,svg which are a sensible default for an ed‐
11870 itable document.
11871
11872 When choosing a format, rclone runs down the list provided in order and
11873 chooses the first file format the doc can be exported as from the list.
11874 If the file can't be exported to a format on the formats list, then
11875 rclone will choose a format from the default list.
11876
11877 If you prefer an archive copy then you might use --drive-export-for‐
11878 mats pdf, or if you prefer openoffice/libreoffice formats you might use
11879 --drive-export-formats ods,odt,odp.
11880
11881 Note that rclone adds the extension to the google doc, so if it is
11882 called My Spreadsheet on google docs, it will be exported as My Spread‐
11883 sheet.xlsx or My Spreadsheet.pdf etc.
11884
11885 When importing files into Google Drive, rclone will conververt all
11886 files with an extension in --drive-import-formats to their associated
11887 document type. rclone will not convert any files by default, since the
11888 conversion is lossy process.
11889
11890 The conversion must result in a file with the same extension when the
11891 --drive-export-formats rules are applied to the uploaded document.
11892
11893 Here are some examples for allowed and prohibited conversions.
11894
11895 export-for‐ import-for‐ Upload Ext Document Ext Allowed
11896 mats mats
11897 ────────────────────────────────────────────────────────────────
11898 odt odt odt odt Yes
11899 odt docx,odt odt odt Yes
11900 docx docx docx Yes
11901 odt odt docx No
11902 odt,docx docx,odt docx odt No
11903 docx,odt docx,odt docx docx Yes
11904 docx,odt docx,odt odt docx No
11905
11906 This limitation can be disabled by specifying --drive-allow-im‐
11907 port-name-change. When using this flag, rclone can convert multiple
11908 files types resulting in the same document type at once, eg with
11909 --drive-import-formats docx,odt,txt, all files having these extension
11910 would result in a document represented as a docx file. This brings the
11911 additional risk of overwriting a document, if multiple files have the
11912 same stem. Many rclone operations will not handle this name change in
11913 any way. They assume an equal name when copying files and might copy
11914 the file again or delete them when the name changes.
11915
11916 Here are the possible export extensions with their corresponding mime
11917 types. Most of these can also be used for importing, but there more
11918 that are not listed here. Some of these additional ones might only be
11919 available when the operating system provides the correct MIME type en‐
11920 tries.
11921
11922 This list can be changed by Google Drive at any time and might not rep‐
11923 resent the currently available conversions.
11924
11925 Extension Mime Type Description
11926 ─────────────────────────────────────────────────────────────────────────────────────
11927 csv text/csv Standard CSV format for
11928 Spreadsheets
11929 docx application/vnd.openxml‐ Microsoft Office Document
11930 formats-officedocu‐
11931 ment.wordprocess‐
11932 ingml.document
11933 epub application/epub+zip E-book format
11934 html text/html An HTML Document
11935 jpg image/jpeg A JPEG Image File
11936 json applica‐ JSON Text Format
11937 tion/vnd.google-apps.script+json
11938 odp application/vnd.oasis.opendocu‐ Openoffice Presentation
11939 ment.presentation
11940 ods application/vnd.oasis.opendocu‐ Openoffice Spreadsheet
11941 ment.spreadsheet
11942
11943
11944 ods application/x-vnd.oasis.opendoc‐ Openoffice Spreadsheet
11945 ument.spreadsheet
11946 odt application/vnd.oasis.opendocu‐ Openoffice Document
11947 ment.text
11948 pdf application/pdf Adobe PDF Format
11949 png image/png PNG Image Format
11950 pptx application/vnd.openxmlfor‐ Microsoft Office Power‐
11951 mats-officedocument.presenta‐ point
11952 tionml.presentation
11953 rtf application/rtf Rich Text Format
11954 svg image/svg+xml Scalable Vector Graphics
11955 Format
11956 tsv text/tab-separated-values Standard TSV format for
11957 spreadsheets
11958 txt text/plain Plain Text
11959 xlsx application/vnd.openxmlfor‐ Microsoft Office Spread‐
11960 mats-officedocument.spread‐ sheet
11961 sheetml.sheet
11962 zip application/zip A ZIP file of HTML, Images
11963 CSS
11964
11965 Google documents can also be exported as link files. These files will
11966 open a browser window for the Google Docs website of that document when
11967 opened. The link file extension has to be specified as a --drive-ex‐
11968 port-formats parameter. They will match all available Google Docu‐
11969 ments.
11970
11971 Extension Description OS Support
11972 ─────────────────────────────────────────────────
11973 desktop freedesktop.org Linux
11974 specified desktop
11975 entry
11976 link.html An HTML Document All
11977 with a redirect
11978 url INI style link file macOS, Windows
11979 webloc macOS specific XML macOS
11980 format
11981
11982 Standard Options
11983 Here are the standard options specific to drive (Google Drive).
11984
11985 –drive-client-id
11986 Google Application Client Id Setting your own is recommended. See
11987 https://rclone.org/drive/#making-your-own-client-id for how to create
11988 your own. If you leave this blank, it will use an internal key which
11989 is low performance.
11990
11991 · Config: client_id
11992
11993 · Env Var: RCLONE_DRIVE_CLIENT_ID
11994
11995 · Type: string
11996
11997 · Default: ""
11998
11999 –drive-client-secret
12000 Google Application Client Secret Setting your own is recommended.
12001
12002 · Config: client_secret
12003
12004 · Env Var: RCLONE_DRIVE_CLIENT_SECRET
12005
12006 · Type: string
12007
12008 · Default: ""
12009
12010 –drive-scope
12011 Scope that rclone should use when requesting access from drive.
12012
12013 · Config: scope
12014
12015 · Env Var: RCLONE_DRIVE_SCOPE
12016
12017 · Type: string
12018
12019 · Default: ""
12020
12021 · Examples:
12022
12023 · “drive”
12024
12025 · Full access all files, excluding Application Data Folder.
12026
12027 · “drive.readonly”
12028
12029 · Read-only access to file metadata and file contents.
12030
12031 · “drive.file”
12032
12033 · Access to files created by rclone only.
12034
12035 · These are visible in the drive website.
12036
12037 · File authorization is revoked when the user deauthorizes the app.
12038
12039 · “drive.appfolder”
12040
12041 · Allows read and write access to the Application Data folder.
12042
12043 · This is not visible in the drive website.
12044
12045 · “drive.metadata.readonly”
12046
12047 · Allows read-only access to file metadata but
12048
12049 · does not allow any access to read or download file content.
12050
12051 –drive-root-folder-id
12052 ID of the root folder Leave blank normally. Fill in to access “Comput‐
12053 ers” folders. (see docs).
12054
12055 · Config: root_folder_id
12056
12057 · Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
12058
12059 · Type: string
12060
12061 · Default: ""
12062
12063 –drive-service-account-file
12064 Service Account Credentials JSON file path Leave blank normally. Need‐
12065 ed only if you want use SA instead of interactive login.
12066
12067 · Config: service_account_file
12068
12069 · Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
12070
12071 · Type: string
12072
12073 · Default: ""
12074
12075 Advanced Options
12076 Here are the advanced options specific to drive (Google Drive).
12077
12078 –drive-service-account-credentials
12079 Service Account Credentials JSON blob Leave blank normally. Needed on‐
12080 ly if you want use SA instead of interactive login.
12081
12082 · Config: service_account_credentials
12083
12084 · Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
12085
12086 · Type: string
12087
12088 · Default: ""
12089
12090 –drive-team-drive
12091 ID of the Team Drive
12092
12093 · Config: team_drive
12094
12095 · Env Var: RCLONE_DRIVE_TEAM_DRIVE
12096
12097 · Type: string
12098
12099 · Default: ""
12100
12101 –drive-auth-owner-only
12102 Only consider files owned by the authenticated user.
12103
12104 · Config: auth_owner_only
12105
12106 · Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
12107
12108 · Type: bool
12109
12110 · Default: false
12111
12112 –drive-use-trash
12113 Send files to the trash instead of deleting permanently. Defaults to
12114 true, namely sending files to the trash. Use --drive-use-trash=false
12115 to delete files permanently instead.
12116
12117 · Config: use_trash
12118
12119 · Env Var: RCLONE_DRIVE_USE_TRASH
12120
12121 · Type: bool
12122
12123 · Default: true
12124
12125 –drive-skip-gdocs
12126 Skip google documents in all listings. If given, gdocs practically be‐
12127 come invisible to rclone.
12128
12129 · Config: skip_gdocs
12130
12131 · Env Var: RCLONE_DRIVE_SKIP_GDOCS
12132
12133 · Type: bool
12134
12135 · Default: false
12136
12137 –drive-skip-checksum-gphotos
12138 Skip MD5 checksum on Google photos and videos only.
12139
12140 Use this if you get checksum errors when transferring Google photos or
12141 videos.
12142
12143 Setting this flag will cause Google photos and videos to return a blank
12144 MD5 checksum.
12145
12146 Google photos are identifed by being in the “photos” space.
12147
12148 Corrupted checksums are caused by Google modifying the image/video but
12149 not updating the checksum.
12150
12151 · Config: skip_checksum_gphotos
12152
12153 · Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
12154
12155 · Type: bool
12156
12157 · Default: false
12158
12159 –drive-shared-with-me
12160 Only show files that are shared with me.
12161
12162 Instructs rclone to operate on your “Shared with me” folder (where
12163 Google Drive lets you access the files and folders others have shared
12164 with you).
12165
12166 This works both with the “list” (lsd, lsl, etc) and the “copy” commands
12167 (copy, sync, etc), and with all other commands too.
12168
12169 · Config: shared_with_me
12170
12171 · Env Var: RCLONE_DRIVE_SHARED_WITH_ME
12172
12173 · Type: bool
12174
12175 · Default: false
12176
12177 –drive-trashed-only
12178 Only show files that are in the trash. This will show trashed files in
12179 their original directory structure.
12180
12181 · Config: trashed_only
12182
12183 · Env Var: RCLONE_DRIVE_TRASHED_ONLY
12184
12185 · Type: bool
12186
12187 · Default: false
12188
12189 –drive-formats
12190 Deprecated: see export_formats
12191
12192 · Config: formats
12193
12194 · Env Var: RCLONE_DRIVE_FORMATS
12195
12196 · Type: string
12197
12198 · Default: ""
12199
12200 –drive-export-formats
12201 Comma separated list of preferred formats for downloading Google docs.
12202
12203 · Config: export_formats
12204
12205 · Env Var: RCLONE_DRIVE_EXPORT_FORMATS
12206
12207 · Type: string
12208
12209 · Default: “docx,xlsx,pptx,svg”
12210
12211 –drive-import-formats
12212 Comma separated list of preferred formats for uploading Google docs.
12213
12214 · Config: import_formats
12215
12216 · Env Var: RCLONE_DRIVE_IMPORT_FORMATS
12217
12218 · Type: string
12219
12220 · Default: ""
12221
12222 –drive-allow-import-name-change
12223 Allow the filetype to change when uploading Google docs (e.g. file.doc
12224 to file.docx). This will confuse sync and reupload every time.
12225
12226 · Config: allow_import_name_change
12227
12228 · Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
12229
12230 · Type: bool
12231
12232 · Default: false
12233
12234 –drive-use-created-date
12235 Use file created date instead of modified date.,
12236
12237 Useful when downloading data and you want the creation date used in
12238 place of the last modified date.
12239
12240 WARNING: This flag may have some unexpected consequences.
12241
12242 When uploading to your drive all files will be overwritten unless they
12243 haven't been modified since their creation. And the inverse will occur
12244 while downloading. This side effect can be avoided by using the
12245 “–checksum” flag.
12246
12247 This feature was implemented to retain photos capture date as recorded
12248 by google photos. You will first need to check the “Create a Google
12249 Photos folder” option in your google drive settings. You can then copy
12250 or move the photos locally and use the date the image was taken (creat‐
12251 ed) set as the modification date.
12252
12253 · Config: use_created_date
12254
12255 · Env Var: RCLONE_DRIVE_USE_CREATED_DATE
12256
12257 · Type: bool
12258
12259 · Default: false
12260
12261 –drive-list-chunk
12262 Size of listing chunk 100-1000. 0 to disable.
12263
12264 · Config: list_chunk
12265
12266 · Env Var: RCLONE_DRIVE_LIST_CHUNK
12267
12268 · Type: int
12269
12270 · Default: 1000
12271
12272 –drive-impersonate
12273 Impersonate this user when using a service account.
12274
12275 · Config: impersonate
12276
12277 · Env Var: RCLONE_DRIVE_IMPERSONATE
12278
12279 · Type: string
12280
12281 · Default: ""
12282
12283 –drive-alternate-export
12284 Use alternate export URLs for google documents export.,
12285
12286 If this option is set this instructs rclone to use an alternate set of
12287 export URLs for drive documents. Users have reported that the official
12288 export URLs can't export large documents, whereas these unofficial ones
12289 can.
12290
12291 See rclone issue #2243 (https://github.com/ncw/rclone/issues/2243) for
12292 background, this google drive issue (https://issuetrack‐
12293 er.google.com/issues/36761333) and this helpful post (https://www.lab‐
12294 nol.org/internet/direct-links-for-google-drive/28356/).
12295
12296 · Config: alternate_export
12297
12298 · Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
12299
12300 · Type: bool
12301
12302 · Default: false
12303
12304 –drive-upload-cutoff
12305 Cutoff for switching to chunked upload
12306
12307 · Config: upload_cutoff
12308
12309 · Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
12310
12311 · Type: SizeSuffix
12312
12313 · Default: 8M
12314
12315 –drive-chunk-size
12316 Upload chunk size. Must a power of 2 >= 256k.
12317
12318 Making this larger will improve performance, but note that each chunk
12319 is buffered in memory one per transfer.
12320
12321 Reducing this will reduce memory usage but decrease performance.
12322
12323 · Config: chunk_size
12324
12325 · Env Var: RCLONE_DRIVE_CHUNK_SIZE
12326
12327 · Type: SizeSuffix
12328
12329 · Default: 8M
12330
12331 –drive-acknowledge-abuse
12332 Set to allow files which return cannotDownloadAbusiveFile to be down‐
12333 loaded.
12334
12335 If downloading a file returns the error “This file has been identified
12336 as malware or spam and cannot be downloaded” with the error code “can‐
12337 notDownloadAbusiveFile” then supply this flag to rclone to indicate you
12338 acknowledge the risks of downloading the file and rclone will download
12339 it anyway.
12340
12341 · Config: acknowledge_abuse
12342
12343 · Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
12344
12345 · Type: bool
12346
12347 · Default: false
12348
12349 –drive-keep-revision-forever
12350 Keep new head revision of each file forever.
12351
12352 · Config: keep_revision_forever
12353
12354 · Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
12355
12356 · Type: bool
12357
12358 · Default: false
12359
12360 –drive-v2-download-min-size
12361 If Object's are greater, use drive v2 API to download.
12362
12363 · Config: v2_download_min_size
12364
12365 · Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
12366
12367 · Type: SizeSuffix
12368
12369 · Default: off
12370
12371 –drive-pacer-min-sleep
12372 Minimum time to sleep between API calls.
12373
12374 · Config: pacer_min_sleep
12375
12376 · Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
12377
12378 · Type: Duration
12379
12380 · Default: 100ms
12381
12382 –drive-pacer-burst
12383 Number of API calls to allow without sleeping.
12384
12385 · Config: pacer_burst
12386
12387 · Env Var: RCLONE_DRIVE_PACER_BURST
12388
12389 · Type: int
12390
12391 · Default: 100
12392
12393 Limitations
12394 Drive has quite a lot of rate limiting. This causes rclone to be lim‐
12395 ited to transferring about 2 files per second only. Individual files
12396 may be transferred much faster at 100s of MBytes/s but lots of small
12397 files can take a long time.
12398
12399 Server side copies are also subject to a separate rate limit. If you
12400 see User rate limit exceeded errors, wait at least 24 hours and retry.
12401 You can disable server side copies with --disable copy to download and
12402 upload the files if you prefer.
12403
12404 Limitations of Google Docs
12405 Google docs will appear as size -1 in rclone ls and as size 0 in any‐
12406 thing which uses the VFS layer, eg rclone mount, rclone serve.
12407
12408 This is because rclone can't find out the size of the Google docs with‐
12409 out downloading them.
12410
12411 Google docs will transfer correctly with rclone sync, rclone copy etc
12412 as rclone knows to ignore the size when doing the transfer.
12413
12414 However an unfortunate consequence of this is that you can't download
12415 Google docs using rclone mount - you will get a 0 sized file. If you
12416 try again the doc may gain its correct size and be downloadable.
12417
12418 Duplicated files
12419 Sometimes, for no reason I've been able to track down, drive will du‐
12420 plicate a file that rclone uploads. Drive unlike all the other remotes
12421 can have duplicated files.
12422
12423 Duplicated files cause problems with the syncing and you will see mes‐
12424 sages in the log about duplicates.
12425
12426 Use rclone dedupe to fix duplicated files.
12427
12428 Note that this isn't just a problem with rclone, even Google Photos on
12429 Android duplicates files on drive sometimes.
12430
12431 Rclone appears to be re-copying files it shouldn't
12432 The most likely cause of this is the duplicated file issue above - run
12433 rclone dedupe and check your logs for duplicate object or directory
12434 messages.
12435
12436 This can also be caused by a delay/caching on google drive's end when
12437 comparing directory listings. Specifically with team drives used in
12438 combination with –fast-list. Files that were uploaded recently may not
12439 appear on the directory list sent to rclone when using –fast-list.
12440
12441 Waiting a moderate period of time between attempts (estimated to be ap‐
12442 proximately 1 hour) and/or not using –fast-list both seem to be effec‐
12443 tive in preventing the problem.
12444
12445 Making your own client_id
12446 When you use rclone with Google drive in its default configuration you
12447 are using rclone's client_id. This is shared between all the rclone
12448 users. There is a global rate limit on the number of queries per sec‐
12449 ond that each client_id can do set by Google. rclone already has a
12450 high quota and I will continue to make sure it is high enough by con‐
12451 tacting Google.
12452
12453 It is strongly recommended to use your own client ID as the default
12454 rclone ID is heavily used. If you have multiple services running, it
12455 is recommended to use an API key for each service. The default Google
12456 quota is 10 transactions per second so it is recommended to stay under
12457 that number as if you use more than that, it will cause rclone to rate
12458 limit and make things slower.
12459
12460 Here is how to create your own Google Drive client ID for rclone:
12461
12462 1. Log into the Google API Console (https://console.develop‐
12463 ers.google.com/) with your Google account. It doesn't matter what
12464 Google account you use. (It need not be the same account as the
12465 Google Drive you want to access)
12466
12467 2. Select a project or create a new project.
12468
12469 3. Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the
12470 then “Google Drive API”.
12471
12472 4. Click “Credentials” in the left-side panel (not “Create creden‐
12473 tials”, which opens the wizard), then “Create credentials”, then
12474 “OAuth client ID”. It will prompt you to set the OAuth consent
12475 screen product name, if you haven't set one already.
12476
12477 5. Choose an application type of “other”, and click “Create”. (the de‐
12478 fault name is fine)
12479
12480 6. It will show you a client ID and client secret. Use these values in
12481 rclone config to add a new remote or edit an existing remote.
12482
12483 (Thanks to @balazer on github for these instructions.)
12484
12485 HTTP
12486 The HTTP remote is a read only remote for reading files of a webserver.
12487 The webserver should provide file listings which rclone will read and
12488 turn into a remote. This has been tested with common webservers such
12489 as Apache/Nginx/Caddy and will likely work with file listings from most
12490 web servers. (If it doesn't then please file an issue, or send a pull
12491 request!)
12492
12493 Paths are specified as remote: or remote:path/to/dir.
12494
12495 Here is an example of how to make a remote called remote. First run:
12496
12497 rclone config
12498
12499 This will guide you through an interactive setup process:
12500
12501 No remotes found - make a new one
12502 n) New remote
12503 s) Set configuration password
12504 q) Quit config
12505 n/s/q> n
12506 name> remote
12507 Type of storage to configure.
12508 Choose a number from below, or type in your own value
12509 1 / Amazon Drive
12510 \ "amazon cloud drive"
12511 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
12512 \ "s3"
12513 3 / Backblaze B2
12514 \ "b2"
12515 4 / Dropbox
12516 \ "dropbox"
12517 5 / Encrypt/Decrypt a remote
12518 \ "crypt"
12519 6 / FTP Connection
12520 \ "ftp"
12521 7 / Google Cloud Storage (this is not Google Drive)
12522 \ "google cloud storage"
12523 8 / Google Drive
12524 \ "drive"
12525 9 / Hubic
12526 \ "hubic"
12527 10 / Local Disk
12528 \ "local"
12529 11 / Microsoft OneDrive
12530 \ "onedrive"
12531 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
12532 \ "swift"
12533 13 / SSH/SFTP Connection
12534 \ "sftp"
12535 14 / Yandex Disk
12536 \ "yandex"
12537 15 / http Connection
12538 \ "http"
12539 Storage> http
12540 URL of http host to connect to
12541 Choose a number from below, or type in your own value
12542 1 / Connect to example.com
12543 \ "https://example.com"
12544 url> https://beta.rclone.org
12545 Remote config
12546 --------------------
12547 [remote]
12548 url = https://beta.rclone.org
12549 --------------------
12550 y) Yes this is OK
12551 e) Edit this remote
12552 d) Delete this remote
12553 y/e/d> y
12554 Current remotes:
12555
12556 Name Type
12557 ==== ====
12558 remote http
12559
12560 e) Edit existing remote
12561 n) New remote
12562 d) Delete remote
12563 r) Rename remote
12564 c) Copy remote
12565 s) Set configuration password
12566 q) Quit config
12567 e/n/d/r/c/s/q> q
12568
12569 This remote is called remote and can now be used like this
12570
12571 See all the top level directories
12572
12573 rclone lsd remote:
12574
12575 List the contents of a directory
12576
12577 rclone ls remote:directory
12578
12579 Sync the remote directory to /home/local/directory, deleting any excess
12580 files.
12581
12582 rclone sync remote:directory /home/local/directory
12583
12584 Read only
12585 This remote is read only - you can't upload files to an HTTP server.
12586
12587 Modified time
12588 Most HTTP servers store time accurate to 1 second.
12589
12590 Checksum
12591 No checksums are stored.
12592
12593 Usage without a config file
12594 Since the http remote only has one config parameter it is easy to use
12595 without a config file:
12596
12597 rclone lsd --http-url https://beta.rclone.org :http:
12598
12599 Standard Options
12600 Here are the standard options specific to http (http Connection).
12601
12602 –http-url
12603 URL of http host to connect to
12604
12605 · Config: url
12606
12607 · Env Var: RCLONE_HTTP_URL
12608
12609 · Type: string
12610
12611 · Default: ""
12612
12613 · Examples:
12614
12615 · “https://example.com”
12616
12617 · Connect to example.com
12618
12619 · “https://user:pass@example.com”
12620
12621 · Connect to example.com using a username and password
12622
12623 Advanced Options
12624 Here are the advanced options specific to http (http Connection).
12625
12626 –http-no-slash
12627 Set this if the site doesn't end directories with /
12628
12629 Use this if your target website does not use / on the end of directo‐
12630 ries.
12631
12632 A / on the end of a path is how rclone normally tells the difference
12633 between files and directories. If this flag is set, then rclone will
12634 treat all files with Content-Type: text/html as directories and read
12635 URLs from them rather than downloading them.
12636
12637 Note that this may cause rclone to confuse genuine HTML files with di‐
12638 rectories.
12639
12640 · Config: no_slash
12641
12642 · Env Var: RCLONE_HTTP_NO_SLASH
12643
12644 · Type: bool
12645
12646 · Default: false
12647
12648 Hubic
12649 Paths are specified as remote:path
12650
12651 Paths are specified as remote:container (or remote: for the lsd com‐
12652 mand.) You may put subdirectories in too, eg remote:contain‐
12653 er/path/to/dir.
12654
12655 The initial setup for Hubic involves getting a token from Hubic which
12656 you need to do in your browser. rclone config walks you through it.
12657
12658 Here is an example of how to make a remote called remote. First run:
12659
12660 rclone config
12661
12662 This will guide you through an interactive setup process:
12663
12664 n) New remote
12665 s) Set configuration password
12666 n/s> n
12667 name> remote
12668 Type of storage to configure.
12669 Choose a number from below, or type in your own value
12670 1 / Amazon Drive
12671 \ "amazon cloud drive"
12672 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
12673 \ "s3"
12674 3 / Backblaze B2
12675 \ "b2"
12676 4 / Dropbox
12677 \ "dropbox"
12678 5 / Encrypt/Decrypt a remote
12679 \ "crypt"
12680 6 / Google Cloud Storage (this is not Google Drive)
12681 \ "google cloud storage"
12682 7 / Google Drive
12683 \ "drive"
12684 8 / Hubic
12685 \ "hubic"
12686 9 / Local Disk
12687 \ "local"
12688 10 / Microsoft OneDrive
12689 \ "onedrive"
12690 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
12691 \ "swift"
12692 12 / SSH/SFTP Connection
12693 \ "sftp"
12694 13 / Yandex Disk
12695 \ "yandex"
12696 Storage> 8
12697 Hubic Client Id - leave blank normally.
12698 client_id>
12699 Hubic Client Secret - leave blank normally.
12700 client_secret>
12701 Remote config
12702 Use auto config?
12703 * Say Y if not sure
12704 * Say N if you are working on a remote or headless machine
12705 y) Yes
12706 n) No
12707 y/n> y
12708 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
12709 Log in and authorize rclone for access
12710 Waiting for code...
12711 Got code
12712 --------------------
12713 [remote]
12714 client_id =
12715 client_secret =
12716 token = {"access_token":"XXXXXX"}
12717 --------------------
12718 y) Yes this is OK
12719 e) Edit this remote
12720 d) Delete this remote
12721 y/e/d> y
12722
12723 See the remote setup docs (https://rclone.org/remote_setup/) for how to
12724 set it up on a machine with no Internet browser available.
12725
12726 Note that rclone runs a webserver on your local machine to collect the
12727 token as returned from Hubic. This only runs from the moment it opens
12728 your browser to the moment you get back the verification code. This is
12729 on http://127.0.0.1:53682/ and this it may require you to unblock it
12730 temporarily if you are running a host firewall.
12731
12732 Once configured you can then use rclone like this,
12733
12734 List containers in the top level of your Hubic
12735
12736 rclone lsd remote:
12737
12738 List all the files in your Hubic
12739
12740 rclone ls remote:
12741
12742 To copy a local directory to an Hubic directory called backup
12743
12744 rclone copy /home/source remote:backup
12745
12746 If you want the directory to be visible in the official Hubic browser,
12747 you need to copy your files to the default directory
12748
12749 rclone copy /home/source remote:default/backup
12750
12751 –fast-list
12752 This remote supports --fast-list which allows you to use fewer transac‐
12753 tions in exchange for more memory. See the rclone docs (/docs/#fast-
12754 list) for more details.
12755
12756 Modified time
12757 The modified time is stored as metadata on the object as X-Ob‐
12758 ject-Meta-Mtime as floating point since the epoch accurate to 1 ns.
12759
12760 This is a de facto standard (used in the official python-swiftclient
12761 amongst others) for storing the modification time for an object.
12762
12763 Note that Hubic wraps the Swift backend, so most of the properties of
12764 are the same.
12765
12766 Standard Options
12767 Here are the standard options specific to hubic (Hubic).
12768
12769 –hubic-client-id
12770 Hubic Client Id Leave blank normally.
12771
12772 · Config: client_id
12773
12774 · Env Var: RCLONE_HUBIC_CLIENT_ID
12775
12776 · Type: string
12777
12778 · Default: ""
12779
12780 –hubic-client-secret
12781 Hubic Client Secret Leave blank normally.
12782
12783 · Config: client_secret
12784
12785 · Env Var: RCLONE_HUBIC_CLIENT_SECRET
12786
12787 · Type: string
12788
12789 · Default: ""
12790
12791 Advanced Options
12792 Here are the advanced options specific to hubic (Hubic).
12793
12794 –hubic-chunk-size
12795 Above this size files will be chunked into a _segments container.
12796
12797 Above this size files will be chunked into a _segments container. The
12798 default for this is 5GB which is its maximum value.
12799
12800 · Config: chunk_size
12801
12802 · Env Var: RCLONE_HUBIC_CHUNK_SIZE
12803
12804 · Type: SizeSuffix
12805
12806 · Default: 5G
12807
12808 –hubic-no-chunk
12809 Don't chunk files during streaming upload.
12810
12811 When doing streaming uploads (eg using rcat or mount) setting this flag
12812 will cause the swift backend to not upload chunked files.
12813
12814 This will limit the maximum upload size to 5GB. However non chunked
12815 files are easier to deal with and have an MD5SUM.
12816
12817 Rclone will still chunk files bigger than chunk_size when doing normal
12818 copy operations.
12819
12820 · Config: no_chunk
12821
12822 · Env Var: RCLONE_HUBIC_NO_CHUNK
12823
12824 · Type: bool
12825
12826 · Default: false
12827
12828 Limitations
12829 This uses the normal OpenStack Swift mechanism to refresh the Swift API
12830 credentials and ignores the expires field returned by the Hubic API.
12831
12832 The Swift API doesn't return a correct MD5SUM for segmented files (Dy‐
12833 namic or Static Large Objects) so rclone won't check or use the MD5SUM
12834 for these.
12835
12836 Jottacloud
12837 Paths are specified as remote:path
12838
12839 Paths may be as deep as required, eg remote:directory/subdirectory.
12840
12841 To configure Jottacloud you will need to enter your username and pass‐
12842 word and select a mountpoint.
12843
12844 Here is an example of how to make a remote called remote. First run:
12845
12846 rclone config
12847
12848 This will guide you through an interactive setup process:
12849
12850 No remotes found - make a new one
12851 n) New remote
12852 s) Set configuration password
12853 q) Quit config
12854 n/s/q> n
12855 name> remote
12856 Type of storage to configure.
12857 Enter a string value. Press Enter for the default ("").
12858 Choose a number from below, or type in your own value
12859 [snip]
12860 13 / JottaCloud
12861 \ "jottacloud"
12862 [snip]
12863 Storage> jottacloud
12864 User Name
12865 Enter a string value. Press Enter for the default ("").
12866 user> user
12867 The mountpoint to use.
12868 Enter a string value. Press Enter for the default ("").
12869 Choose a number from below, or type in your own value
12870 1 / Will be synced by the official client.
12871 \ "Sync"
12872 2 / Archive
12873 \ "Archive"
12874 mountpoint> Archive
12875 Edit advanced config? (y/n)
12876 y) Yes
12877 n) No
12878 y/n> n
12879 Remote config
12880
12881 Do you want to create a machine specific API key?
12882
12883 Rclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.
12884
12885 y) Yes
12886 n) No
12887 y/n> y
12888 Your Jottacloud password is only required during config and will not be stored.
12889 password:
12890 --------------------
12891 [remote]
12892 type = jottacloud
12893 user = olihey
12894 mountpoint = Archive
12895 client_id = .....
12896 client_secret = ........
12897 token = {........}
12898 --------------------
12899 y) Yes this is OK
12900 e) Edit this remote
12901 d) Delete this remote
12902 y/e/d> y
12903
12904 Once configured you can then use rclone like this,
12905
12906 List directories in top level of your Jottacloud
12907
12908 rclone lsd remote:
12909
12910 List all the files in your Jottacloud
12911
12912 rclone ls remote:
12913
12914 To copy a local directory to an Jottacloud directory called backup
12915
12916 rclone copy /home/source remote:backup
12917
12918 –fast-list
12919 This remote supports --fast-list which allows you to use fewer transac‐
12920 tions in exchange for more memory. See the rclone docs (/docs/#fast-
12921 list) for more details.
12922
12923 Note that the implementation in Jottacloud always uses only a single
12924 API request to get the entire list, so for large folders this could
12925 lead to long wait time before the first results are shown.
12926
12927 Modified time and hashes
12928 Jottacloud allows modification times to be set on objects accurate to 1
12929 second. These will be used to detect whether objects need syncing or
12930 not.
12931
12932 Jottacloud supports MD5 type hashes, so you can use the --checksum
12933 flag.
12934
12935 Note that Jottacloud requires the MD5 hash before upload so if the
12936 source does not have an MD5 checksum then the file will be cached tem‐
12937 porarily on disk (wherever the TMPDIR environment variable points to)
12938 before it is uploaded. Small files will be cached in memory - see the
12939 --jottacloud-md5-memory-limit flag.
12940
12941 Deleting files
12942 By default rclone will send all files to the trash when deleting files.
12943 Due to a lack of API documentation emptying the trash is currently only
12944 possible via the Jottacloud website. If deleting permanently is re‐
12945 quired then use the --jottacloud-hard-delete flag, or set the equiva‐
12946 lent environment variable.
12947
12948 Versions
12949 Jottacloud supports file versioning. When rclone uploads a new version
12950 of a file it creates a new version of it. Currently rclone only sup‐
12951 ports retrieving the current version but older versions can be accessed
12952 via the Jottacloud Website.
12953
12954 Quota information
12955 To view your current quota you can use the rclone about remote: command
12956 which will display your usage limit (unless it is unlimited) and the
12957 current usage.
12958
12959 Device IDs
12960 Jottacloud requires each `device' to be registered. Rclone brings such
12961 a registration to easily access your account but if you want to use
12962 Jottacloud together with rclone on multiple machines you NEED to create
12963 a seperate deviceID/deviceSecrect on each machine. You will asked dur‐
12964 ing setting up the remote. Please be aware that this also means that
12965 copying the rclone config from one machine to another does NOT work
12966 with Jottacloud accounts. You have to create it on each machine.
12967
12968 Standard Options
12969 Here are the standard options specific to jottacloud (JottaCloud).
12970
12971 –jottacloud-user
12972 User Name:
12973
12974 · Config: user
12975
12976 · Env Var: RCLONE_JOTTACLOUD_USER
12977
12978 · Type: string
12979
12980 · Default: ""
12981
12982 –jottacloud-mountpoint
12983 The mountpoint to use.
12984
12985 · Config: mountpoint
12986
12987 · Env Var: RCLONE_JOTTACLOUD_MOUNTPOINT
12988
12989 · Type: string
12990
12991 · Default: ""
12992
12993 · Examples:
12994
12995 · “Sync”
12996
12997 · Will be synced by the official client.
12998
12999 · “Archive”
13000
13001 · Archive
13002
13003 Advanced Options
13004 Here are the advanced options specific to jottacloud (JottaCloud).
13005
13006 –jottacloud-md5-memory-limit
13007 Files bigger than this will be cached on disk to calculate the MD5 if
13008 required.
13009
13010 · Config: md5_memory_limit
13011
13012 · Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
13013
13014 · Type: SizeSuffix
13015
13016 · Default: 10M
13017
13018 –jottacloud-hard-delete
13019 Delete files permanently rather than putting them into the trash.
13020
13021 · Config: hard_delete
13022
13023 · Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
13024
13025 · Type: bool
13026
13027 · Default: false
13028
13029 –jottacloud-unlink
13030 Remove existing public link to file/folder with link command rather
13031 than creating. Default is false, meaning link command will create or
13032 retrieve public link.
13033
13034 · Config: unlink
13035
13036 · Env Var: RCLONE_JOTTACLOUD_UNLINK
13037
13038 · Type: bool
13039
13040 · Default: false
13041
13042 –jottacloud-upload-resume-limit
13043 Files bigger than this can be resumed if the upload fail's.
13044
13045 · Config: upload_resume_limit
13046
13047 · Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
13048
13049 · Type: SizeSuffix
13050
13051 · Default: 10M
13052
13053 Limitations
13054 Note that Jottacloud is case insensitive so you can't have a file
13055 called “Hello.doc” and one called “hello.doc”.
13056
13057 There are quite a few characters that can't be in Jottacloud file
13058 names. Rclone will map these names to and from an identical looking
13059 unicode equivalent. For example if a file has a ? in it will be
13060 mapped to ? instead.
13061
13062 Jottacloud only supports filenames up to 255 characters in length.
13063
13064 Troubleshooting
13065 Jottacloud exhibits some inconsistent behaviours regarding deleted
13066 files and folders which may cause Copy, Move and DirMove operations to
13067 previously deleted paths to fail. Emptying the trash should help in
13068 such cases.
13069
13070 Koofr
13071 Paths are specified as remote:path
13072
13073 Paths may be as deep as required, eg remote:directory/subdirectory.
13074
13075 The initial setup for Koofr involves creating an application password
13076 for rclone. You can do that by opening the Koofr web application
13077 (https://app.koofr.net/app/admin/preferences/password), giving the
13078 password a nice name like rclone and clicking on generate.
13079
13080 Here is an example of how to make a remote called koofr. First run:
13081
13082 rclone config
13083
13084 This will guide you through an interactive setup process:
13085
13086 No remotes found - make a new one
13087 n) New remote
13088 s) Set configuration password
13089 q) Quit config
13090 n/s/q> n
13091 name> koofr
13092 Type of storage to configure.
13093 Enter a string value. Press Enter for the default ("").
13094 Choose a number from below, or type in your own value
13095 1 / A stackable unification remote, which can appear to merge the contents of several remotes
13096 \ "union"
13097 2 / Alias for a existing remote
13098 \ "alias"
13099 3 / Amazon Drive
13100 \ "amazon cloud drive"
13101 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
13102 \ "s3"
13103 5 / Backblaze B2
13104 \ "b2"
13105 6 / Box
13106 \ "box"
13107 7 / Cache a remote
13108 \ "cache"
13109 8 / Dropbox
13110 \ "dropbox"
13111 9 / Encrypt/Decrypt a remote
13112 \ "crypt"
13113 10 / FTP Connection
13114 \ "ftp"
13115 11 / Google Cloud Storage (this is not Google Drive)
13116 \ "google cloud storage"
13117 12 / Google Drive
13118 \ "drive"
13119 13 / Hubic
13120 \ "hubic"
13121 14 / JottaCloud
13122 \ "jottacloud"
13123 15 / Koofr
13124 \ "koofr"
13125 16 / Local Disk
13126 \ "local"
13127 17 / Mega
13128 \ "mega"
13129 18 / Microsoft Azure Blob Storage
13130 \ "azureblob"
13131 19 / Microsoft OneDrive
13132 \ "onedrive"
13133 20 / OpenDrive
13134 \ "opendrive"
13135 21 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
13136 \ "swift"
13137 22 / Pcloud
13138 \ "pcloud"
13139 23 / QingCloud Object Storage
13140 \ "qingstor"
13141 24 / SSH/SFTP Connection
13142 \ "sftp"
13143 25 / Webdav
13144 \ "webdav"
13145 26 / Yandex Disk
13146 \ "yandex"
13147 27 / http Connection
13148 \ "http"
13149 Storage> koofr
13150 ** See help for koofr backend at: https://rclone.org/koofr/ **
13151
13152 Your Koofr user name
13153 Enter a string value. Press Enter for the default ("").
13154 user> USER@NAME
13155 Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
13156 y) Yes type in my own password
13157 g) Generate random password
13158 y/g> y
13159 Enter the password:
13160 password:
13161 Confirm the password:
13162 password:
13163 Edit advanced config? (y/n)
13164 y) Yes
13165 n) No
13166 y/n> n
13167 Remote config
13168 --------------------
13169 [koofr]
13170 type = koofr
13171 baseurl = https://app.koofr.net
13172 user = USER@NAME
13173 password = *** ENCRYPTED ***
13174 --------------------
13175 y) Yes this is OK
13176 e) Edit this remote
13177 d) Delete this remote
13178 y/e/d> y
13179
13180 You can choose to edit advanced config in order to enter your own ser‐
13181 vice URL if you use an on-premise or white label Koofr instance, or
13182 choose an alternative mount instead of your primary storage.
13183
13184 Once configured you can then use rclone like this,
13185
13186 List directories in top level of your Koofr
13187
13188 rclone lsd koofr:
13189
13190 List all the files in your Koofr
13191
13192 rclone ls koofr:
13193
13194 To copy a local directory to an Koofr directory called backup
13195
13196 rclone copy /home/source remote:backup
13197
13198 Standard Options
13199 Here are the standard options specific to koofr (Koofr).
13200
13201 –koofr-user
13202 Your Koofr user name
13203
13204 · Config: user
13205
13206 · Env Var: RCLONE_KOOFR_USER
13207
13208 · Type: string
13209
13210 · Default: ""
13211
13212 –koofr-password
13213 Your Koofr password for rclone (generate one at
13214 https://app.koofr.net/app/admin/preferences/password)
13215
13216 · Config: password
13217
13218 · Env Var: RCLONE_KOOFR_PASSWORD
13219
13220 · Type: string
13221
13222 · Default: ""
13223
13224 Advanced Options
13225 Here are the advanced options specific to koofr (Koofr).
13226
13227 –koofr-endpoint
13228 The Koofr API endpoint to use
13229
13230 · Config: endpoint
13231
13232 · Env Var: RCLONE_KOOFR_ENDPOINT
13233
13234 · Type: string
13235
13236 · Default: “https://app.koofr.net”
13237
13238 –koofr-mountid
13239 Mount ID of the mount to use. If omitted, the primary mount is used.
13240
13241 · Config: mountid
13242
13243 · Env Var: RCLONE_KOOFR_MOUNTID
13244
13245 · Type: string
13246
13247 · Default: ""
13248
13249 Limitations
13250 Note that Koofr is case insensitive so you can't have a file called
13251 “Hello.doc” and one called “hello.doc”.
13252
13253 Mega
13254 Mega (https://mega.nz/) is a cloud storage and file hosting service
13255 known for its security feature where all files are encrypted locally
13256 before they are uploaded. This prevents anyone (including employees of
13257 Mega) from accessing the files without knowledge of the key used for
13258 encryption.
13259
13260 This is an rclone backend for Mega which supports the file transfer
13261 features of Mega using the same client side encryption.
13262
13263 Paths are specified as remote:path
13264
13265 Paths may be as deep as required, eg remote:directory/subdirectory.
13266
13267 Here is an example of how to make a remote called remote. First run:
13268
13269 rclone config
13270
13271 This will guide you through an interactive setup process:
13272
13273 No remotes found - make a new one
13274 n) New remote
13275 s) Set configuration password
13276 q) Quit config
13277 n/s/q> n
13278 name> remote
13279 Type of storage to configure.
13280 Choose a number from below, or type in your own value
13281 1 / Alias for a existing remote
13282 \ "alias"
13283 [snip]
13284 14 / Mega
13285 \ "mega"
13286 [snip]
13287 23 / http Connection
13288 \ "http"
13289 Storage> mega
13290 User name
13291 user> you@example.com
13292 Password.
13293 y) Yes type in my own password
13294 g) Generate random password
13295 n) No leave this optional password blank
13296 y/g/n> y
13297 Enter the password:
13298 password:
13299 Confirm the password:
13300 password:
13301 Remote config
13302 --------------------
13303 [remote]
13304 type = mega
13305 user = you@example.com
13306 pass = *** ENCRYPTED ***
13307 --------------------
13308 y) Yes this is OK
13309 e) Edit this remote
13310 d) Delete this remote
13311 y/e/d> y
13312
13313 NOTE: The encryption keys need to have been already generated after a
13314 regular login via the browser, otherwise attempting to use the creden‐
13315 tials in rclone will fail.
13316
13317 Once configured you can then use rclone like this,
13318
13319 List directories in top level of your Mega
13320
13321 rclone lsd remote:
13322
13323 List all the files in your Mega
13324
13325 rclone ls remote:
13326
13327 To copy a local directory to an Mega directory called backup
13328
13329 rclone copy /home/source remote:backup
13330
13331 Modified time and hashes
13332 Mega does not support modification times or hashes yet.
13333
13334 Duplicated files
13335 Mega can have two files with exactly the same name and path (unlike a
13336 normal file system).
13337
13338 Duplicated files cause problems with the syncing and you will see mes‐
13339 sages in the log about duplicates.
13340
13341 Use rclone dedupe to fix duplicated files.
13342
13343 Standard Options
13344 Here are the standard options specific to mega (Mega).
13345
13346 –mega-user
13347 User name
13348
13349 · Config: user
13350
13351 · Env Var: RCLONE_MEGA_USER
13352
13353 · Type: string
13354
13355 · Default: ""
13356
13357 –mega-pass
13358 Password.
13359
13360 · Config: pass
13361
13362 · Env Var: RCLONE_MEGA_PASS
13363
13364 · Type: string
13365
13366 · Default: ""
13367
13368 Advanced Options
13369 Here are the advanced options specific to mega (Mega).
13370
13371 –mega-debug
13372 Output more debug from Mega.
13373
13374 If this flag is set (along with -vv) it will print further debugging
13375 information from the mega backend.
13376
13377 · Config: debug
13378
13379 · Env Var: RCLONE_MEGA_DEBUG
13380
13381 · Type: bool
13382
13383 · Default: false
13384
13385 –mega-hard-delete
13386 Delete files permanently rather than putting them into the trash.
13387
13388 Normally the mega backend will put all deletions into the trash rather
13389 than permanently deleting them. If you specify this then rclone will
13390 permanently delete objects instead.
13391
13392 · Config: hard_delete
13393
13394 · Env Var: RCLONE_MEGA_HARD_DELETE
13395
13396 · Type: bool
13397
13398 · Default: false
13399
13400 Limitations
13401 This backend uses the go-mega go library
13402 (https://github.com/t3rm1n4l/go-mega) which is an opensource go library
13403 implementing the Mega API. There doesn't appear to be any documenta‐
13404 tion for the mega protocol beyond the mega C++ SDK
13405 (https://github.com/meganz/sdk) source code so there are likely quite a
13406 few errors still remaining in this library.
13407
13408 Mega allows duplicate files which may confuse rclone.
13409
13410 Microsoft Azure Blob Storage
13411 Paths are specified as remote:container (or remote: for the lsd com‐
13412 mand.) You may put subdirectories in too, eg remote:contain‐
13413 er/path/to/dir.
13414
13415 Here is an example of making a Microsoft Azure Blob Storage configura‐
13416 tion. For a remote called remote. First run:
13417
13418 rclone config
13419
13420 This will guide you through an interactive setup process:
13421
13422 No remotes found - make a new one
13423 n) New remote
13424 s) Set configuration password
13425 q) Quit config
13426 n/s/q> n
13427 name> remote
13428 Type of storage to configure.
13429 Choose a number from below, or type in your own value
13430 1 / Amazon Drive
13431 \ "amazon cloud drive"
13432 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
13433 \ "s3"
13434 3 / Backblaze B2
13435 \ "b2"
13436 4 / Box
13437 \ "box"
13438 5 / Dropbox
13439 \ "dropbox"
13440 6 / Encrypt/Decrypt a remote
13441 \ "crypt"
13442 7 / FTP Connection
13443 \ "ftp"
13444 8 / Google Cloud Storage (this is not Google Drive)
13445 \ "google cloud storage"
13446 9 / Google Drive
13447 \ "drive"
13448 10 / Hubic
13449 \ "hubic"
13450 11 / Local Disk
13451 \ "local"
13452 12 / Microsoft Azure Blob Storage
13453 \ "azureblob"
13454 13 / Microsoft OneDrive
13455 \ "onedrive"
13456 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
13457 \ "swift"
13458 15 / SSH/SFTP Connection
13459 \ "sftp"
13460 16 / Yandex Disk
13461 \ "yandex"
13462 17 / http Connection
13463 \ "http"
13464 Storage> azureblob
13465 Storage Account Name
13466 account> account_name
13467 Storage Account Key
13468 key> base64encodedkey==
13469 Endpoint for the service - leave blank normally.
13470 endpoint>
13471 Remote config
13472 --------------------
13473 [remote]
13474 account = account_name
13475 key = base64encodedkey==
13476 endpoint =
13477 --------------------
13478 y) Yes this is OK
13479 e) Edit this remote
13480 d) Delete this remote
13481 y/e/d> y
13482
13483 See all containers
13484
13485 rclone lsd remote:
13486
13487 Make a new container
13488
13489 rclone mkdir remote:container
13490
13491 List the contents of a container
13492
13493 rclone ls remote:container
13494
13495 Sync /home/local/directory to the remote container, deleting any excess
13496 files in the container.
13497
13498 rclone sync /home/local/directory remote:container
13499
13500 –fast-list
13501 This remote supports --fast-list which allows you to use fewer transac‐
13502 tions in exchange for more memory. See the rclone docs (/docs/#fast-
13503 list) for more details.
13504
13505 Modified time
13506 The modified time is stored as metadata on the object with the mtime
13507 key. It is stored using RFC3339 Format time with nanosecond precision.
13508 The metadata is supplied during directory listings so there is no over‐
13509 head to using it.
13510
13511 Hashes
13512 MD5 hashes are stored with blobs. However blobs that were uploaded in
13513 chunks only have an MD5 if the source remote was capable of MD5 hashes,
13514 eg the local disk.
13515
13516 Authenticating with Azure Blob Storage
13517 Rclone has 3 ways of authenticating with Azure Blob Storage:
13518
13519 Account and Key
13520 This is the most straight forward and least flexible way. Just fill in
13521 the account and key lines and leave the rest blank.
13522
13523 SAS URL
13524 This can be an account level SAS URL or container level SAS URL
13525
13526 To use it leave account, key blank and fill in sas_url.
13527
13528 Account level SAS URL or container level SAS URL can be obtained from
13529 Azure portal or Azure Storage Explorer. To get a container level SAS
13530 URL right click on a container in the Azure Blob explorer in the Azure
13531 portal.
13532
13533 If You use container level SAS URL, rclone operations are permitted on‐
13534 ly on particular container, eg
13535
13536 rclone ls azureblob:container or rclone ls azureblob:
13537
13538 Since container name already exists in SAS URL, you can leave it empty
13539 as well.
13540
13541 However these will not work
13542
13543 rclone lsd azureblob:
13544 rclone ls azureblob:othercontainer
13545
13546 This would be useful for temporarily allowing third parties access to a
13547 single container or putting credentials into an untrusted environment.
13548
13549 Multipart uploads
13550 Rclone supports multipart uploads with Azure Blob storage. Files big‐
13551 ger than 256MB will be uploaded using chunked upload by default.
13552
13553 The files will be uploaded in parallel in 4MB chunks (by default).
13554 Note that these chunks are buffered in memory and there may be up to
13555 --transfers of them being uploaded at once.
13556
13557 Files can't be split into more than 50,000 chunks so by default, so the
13558 largest file that can be uploaded with 4MB chunk size is 195GB. Above
13559 this rclone will double the chunk size until it creates less than
13560 50,000 chunks. By default this will mean a maximum file size of 3.2TB
13561 can be uploaded. This can be raised to 5TB using --azure‐
13562 blob-chunk-size 100M.
13563
13564 Note that rclone doesn't commit the block list until the end of the up‐
13565 load which means that there is a limit of 9.5TB of multipart uploads in
13566 progress as Azure won't allow more than that amount of uncommitted
13567 blocks.
13568
13569 Standard Options
13570 Here are the standard options specific to azureblob (Microsoft Azure
13571 Blob Storage).
13572
13573 –azureblob-account
13574 Storage Account Name (leave blank to use connection string or SAS URL)
13575
13576 · Config: account
13577
13578 · Env Var: RCLONE_AZUREBLOB_ACCOUNT
13579
13580 · Type: string
13581
13582 · Default: ""
13583
13584 –azureblob-key
13585 Storage Account Key (leave blank to use connection string or SAS URL)
13586
13587 · Config: key
13588
13589 · Env Var: RCLONE_AZUREBLOB_KEY
13590
13591 · Type: string
13592
13593 · Default: ""
13594
13595 –azureblob-sas-url
13596 SAS URL for container level access only (leave blank if using ac‐
13597 count/key or connection string)
13598
13599 · Config: sas_url
13600
13601 · Env Var: RCLONE_AZUREBLOB_SAS_URL
13602
13603 · Type: string
13604
13605 · Default: ""
13606
13607 Advanced Options
13608 Here are the advanced options specific to azureblob (Microsoft Azure
13609 Blob Storage).
13610
13611 –azureblob-endpoint
13612 Endpoint for the service Leave blank normally.
13613
13614 · Config: endpoint
13615
13616 · Env Var: RCLONE_AZUREBLOB_ENDPOINT
13617
13618 · Type: string
13619
13620 · Default: ""
13621
13622 –azureblob-upload-cutoff
13623 Cutoff for switching to chunked upload (<= 256MB).
13624
13625 · Config: upload_cutoff
13626
13627 · Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
13628
13629 · Type: SizeSuffix
13630
13631 · Default: 256M
13632
13633 –azureblob-chunk-size
13634 Upload chunk size (<= 100MB).
13635
13636 Note that this is stored in memory and there may be up to “–transfers”
13637 chunks stored at once in memory.
13638
13639 · Config: chunk_size
13640
13641 · Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
13642
13643 · Type: SizeSuffix
13644
13645 · Default: 4M
13646
13647 –azureblob-list-chunk
13648 Size of blob list.
13649
13650 This sets the number of blobs requested in each listing chunk. Default
13651 is the maximum, 5000. “List blobs” requests are permitted 2 minutes
13652 per megabyte to complete. If an operation is taking longer than 2 min‐
13653 utes per megabyte on average, it will time out ( source
13654 (https://docs.microsoft.com/en-us/rest/api/storageservices/setting-
13655 timeouts-for-blob-service-operations#exceptions-to-default-timeout-in‐
13656 terval) ). This can be used to limit the number of blobs items to re‐
13657 turn, to avoid the time out.
13658
13659 · Config: list_chunk
13660
13661 · Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
13662
13663 · Type: int
13664
13665 · Default: 5000
13666
13667 –azureblob-access-tier
13668 Access tier of blob: hot, cool or archive.
13669
13670 Archived blobs can be restored by setting access tier to hot or cool.
13671 Leave blank if you intend to use default access tier, which is set at
13672 account level
13673
13674 If there is no “access tier” specified, rclone doesn't apply any tier.
13675 rclone performs “Set Tier” operation on blobs while uploading, if ob‐
13676 jects are not modified, specifying “access tier” to new one will have
13677 no effect. If blobs are in “archive tier” at remote, trying to perform
13678 data transfer operations from remote will not be allowed. User should
13679 first restore by tiering blob to “Hot” or “Cool”.
13680
13681 · Config: access_tier
13682
13683 · Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
13684
13685 · Type: string
13686
13687 · Default: ""
13688
13689 Limitations
13690 MD5 sums are only uploaded with chunked files if the source has an MD5
13691 sum. This will always be the case for a local to azure copy.
13692
13693 Microsoft OneDrive
13694 Paths are specified as remote:path
13695
13696 Paths may be as deep as required, eg remote:directory/subdirectory.
13697
13698 The initial setup for OneDrive involves getting a token from Microsoft
13699 which you need to do in your browser. rclone config walks you through
13700 it.
13701
13702 Here is an example of how to make a remote called remote. First run:
13703
13704 rclone config
13705
13706 This will guide you through an interactive setup process:
13707
13708 e) Edit existing remote
13709 n) New remote
13710 d) Delete remote
13711 r) Rename remote
13712 c) Copy remote
13713 s) Set configuration password
13714 q) Quit config
13715 e/n/d/r/c/s/q> n
13716 name> remote
13717 Type of storage to configure.
13718 Enter a string value. Press Enter for the default ("").
13719 Choose a number from below, or type in your own value
13720 ...
13721 17 / Microsoft OneDrive
13722 \ "onedrive"
13723 ...
13724 Storage> 17
13725 Microsoft App Client Id
13726 Leave blank normally.
13727 Enter a string value. Press Enter for the default ("").
13728 client_id>
13729 Microsoft App Client Secret
13730 Leave blank normally.
13731 Enter a string value. Press Enter for the default ("").
13732 client_secret>
13733 Edit advanced config? (y/n)
13734 y) Yes
13735 n) No
13736 y/n> n
13737 Remote config
13738 Use auto config?
13739 * Say Y if not sure
13740 * Say N if you are working on a remote or headless machine
13741 y) Yes
13742 n) No
13743 y/n> y
13744 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
13745 Log in and authorize rclone for access
13746 Waiting for code...
13747 Got code
13748 Choose a number from below, or type in an existing value
13749 1 / OneDrive Personal or Business
13750 \ "onedrive"
13751 2 / Sharepoint site
13752 \ "sharepoint"
13753 3 / Type in driveID
13754 \ "driveid"
13755 4 / Type in SiteID
13756 \ "siteid"
13757 5 / Search a Sharepoint site
13758 \ "search"
13759 Your choice> 1
13760 Found 1 drives, please select the one you want to use:
13761 0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
13762 Chose drive to use:> 0
13763 Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
13764 Is that okay?
13765 y) Yes
13766 n) No
13767 y/n> y
13768 --------------------
13769 [remote]
13770 type = onedrive
13771 token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
13772 drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
13773 drive_type = business
13774 --------------------
13775 y) Yes this is OK
13776 e) Edit this remote
13777 d) Delete this remote
13778 y/e/d> y
13779
13780 See the remote setup docs (https://rclone.org/remote_setup/) for how to
13781 set it up on a machine with no Internet browser available.
13782
13783 Note that rclone runs a webserver on your local machine to collect the
13784 token as returned from Microsoft. This only runs from the moment it
13785 opens your browser to the moment you get back the verification code.
13786 This is on http://127.0.0.1:53682/ and this it may require you to un‐
13787 block it temporarily if you are running a host firewall.
13788
13789 Once configured you can then use rclone like this,
13790
13791 List directories in top level of your OneDrive
13792
13793 rclone lsd remote:
13794
13795 List all the files in your OneDrive
13796
13797 rclone ls remote:
13798
13799 To copy a local directory to an OneDrive directory called backup
13800
13801 rclone copy /home/source remote:backup
13802
13803 Getting your own Client ID and Key
13804 rclone uses a pair of Client ID and Key shared by all rclone users when
13805 performing requests by default. If you are having problems with them
13806 (E.g., seeing a lot of throttling), you can get your own Client ID and
13807 Key by following the steps below:
13808
13809 1. Open https://apps.dev.microsoft.com/#/appList, then click Add an app
13810 (Choose Converged applications if applicable)
13811
13812 2. Enter a name for your app, and click continue. Copy and keep the
13813 Application Id under the app name for later use.
13814
13815 3. Under section Application Secrets, click Generate New Password.
13816 Copy and keep that password for later use.
13817
13818 4. Under section Platforms, click Add platform, then Web. Enter
13819 http://localhost:53682/ in Redirect URLs.
13820
13821 5. Under section Microsoft Graph Permissions, Add these delegated per‐
13822 missions: Files.Read, Files.ReadWrite, Files.Read.All, Files.Read‐
13823 Write.All, offline_access, User.Read.
13824
13825 6. Scroll to the bottom and click Save.
13826
13827 Now the application is complete. Run rclone config to create or edit a
13828 OneDrive remote. Supply the app ID and password as Client ID and Se‐
13829 cret, respectively. rclone will walk you through the remaining steps.
13830
13831 Modified time and hashes
13832 OneDrive allows modification times to be set on objects accurate to 1
13833 second. These will be used to detect whether objects need syncing or
13834 not.
13835
13836 OneDrive personal supports SHA1 type hashes. OneDrive for business and
13837 Sharepoint Server support QuickXorHash (https://docs.microsoft.com/en-
13838 us/onedrive/developer/code-snippets/quickxorhash).
13839
13840 For all types of OneDrive you can use the --checksum flag.
13841
13842 Deleting files
13843 Any files you delete with rclone will end up in the trash. Microsoft
13844 doesn't provide an API to permanently delete files, nor to empty the
13845 trash, so you will have to do that with one of Microsoft's apps or via
13846 the OneDrive website.
13847
13848 Standard Options
13849 Here are the standard options specific to onedrive (Microsoft
13850 OneDrive).
13851
13852 –onedrive-client-id
13853 Microsoft App Client Id Leave blank normally.
13854
13855 · Config: client_id
13856
13857 · Env Var: RCLONE_ONEDRIVE_CLIENT_ID
13858
13859 · Type: string
13860
13861 · Default: ""
13862
13863 –onedrive-client-secret
13864 Microsoft App Client Secret Leave blank normally.
13865
13866 · Config: client_secret
13867
13868 · Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
13869
13870 · Type: string
13871
13872 · Default: ""
13873
13874 Advanced Options
13875 Here are the advanced options specific to onedrive (Microsoft
13876 OneDrive).
13877
13878 –onedrive-chunk-size
13879 Chunk size to upload files with - must be multiple of 320k.
13880
13881 Above this size files will be chunked - must be multiple of 320k. Note
13882 that the chunks will be buffered into memory.
13883
13884 · Config: chunk_size
13885
13886 · Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
13887
13888 · Type: SizeSuffix
13889
13890 · Default: 10M
13891
13892 –onedrive-drive-id
13893 The ID of the drive to use
13894
13895 · Config: drive_id
13896
13897 · Env Var: RCLONE_ONEDRIVE_DRIVE_ID
13898
13899 · Type: string
13900
13901 · Default: ""
13902
13903 –onedrive-drive-type
13904 The type of the drive ( personal | business | documentLibrary )
13905
13906 · Config: drive_type
13907
13908 · Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
13909
13910 · Type: string
13911
13912 · Default: ""
13913
13914 –onedrive-expose-onenote-files
13915 Set to make OneNote files show up in directory listings.
13916
13917 By default rclone will hide OneNote files in directory listings because
13918 operations like “Open” and “Update” won't work on them. But this be‐
13919 haviour may also prevent you from deleting them. If you want to delete
13920 OneNote files or otherwise want them to show up in directory listing,
13921 set this option.
13922
13923 · Config: expose_onenote_files
13924
13925 · Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
13926
13927 · Type: bool
13928
13929 · Default: false
13930
13931 Limitations
13932 Note that OneDrive is case insensitive so you can't have a file called
13933 “Hello.doc” and one called “hello.doc”.
13934
13935 There are quite a few characters that can't be in OneDrive file names.
13936 These can't occur on Windows platforms, but on non-Windows platforms
13937 they are common. Rclone will map these names to and from an identical
13938 looking unicode equivalent. For example if a file has a ? in it will
13939 be mapped to ? instead.
13940
13941 The largest allowed file sizes are 15GB for OneDrive for Business and
13942 35GB for OneDrive Personal (Updated 4 Jan 2019).
13943
13944 The entire path, including the file name, must contain fewer than 400
13945 characters for OneDrive, OneDrive for Business and SharePoint Online.
13946 If you are encrypting file and folder names with rclone, you may want
13947 to pay attention to this limitation because the encrypted names are
13948 typically longer than the original ones.
13949
13950 OneDrive seems to be OK with at least 50,000 files in a folder, but at
13951 100,000 rclone will get errors listing the directory like
13952 couldn't list files: UnknownError:. See #2707
13953 (https://github.com/ncw/rclone/issues/2707) for more info.
13954
13955 An official document about the limitations for different types of
13956 OneDrive can be found here (https://support.office.com/en-us/arti‐
13957 cle/invalid-file-names-and-file-types-in-onedrive-onedrive-for-busi‐
13958 ness-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
13959
13960 Versioning issue
13961 Every change in OneDrive causes the service to create a new version.
13962 This counts against a users quota. For example changing the modifica‐
13963 tion time of a file creates a second version, so the file is using
13964 twice the space.
13965
13966 The copy is the only rclone command affected by this as we copy the
13967 file and then afterwards set the modification time to match the source
13968 file.
13969
13970 Note: Starting October 2018, users will no longer be able to disable
13971 versioning by default. This is because Microsoft has brought an update
13972 (https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Up‐
13973 dates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) to
13974 the mechanism. To change this new default setting, a PowerShell com‐
13975 mand is required to be run by a SharePoint admin. If you are an admin,
13976 you can run these commands in PowerShell to change that setting:
13977
13978 1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case
13979 you haven't installed this already)
13980
13981 2. Import-Module Microsoft.Online.SharePoint.PowerShell -Disable‐
13982 NameChecking
13983
13984 3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Cre‐
13985 dential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with
13986 the actual values; this will prompt for your credentials)
13987
13988 4. Set-SPOTenant -EnableMinimumVersionRequirement $False
13989
13990 5. Disconnect-SPOService (to disconnect from the server)
13991
13992 Below are the steps for normal users to disable versioning. If you
13993 don't see the “No Versioning” option, make sure the above requirements
13994 are met.
13995
13996 User Weropol (https://github.com/Weropol) has found a method to disable
13997 versioning on OneDrive
13998
13999 1. Open the settings menu by clicking on the gear symbol at the top of
14000 the OneDrive Business page.
14001
14002 2. Click Site settings.
14003
14004 3. Once on the Site settings page, navigate to Site Administration >
14005 Site libraries and lists.
14006
14007 4. Click Customize “Documents”.
14008
14009 5. Click General Settings > Versioning Settings.
14010
14011 6. Under Document Version History select the option No versioning.
14012 Note: This will disable the creation of new file versions, but will
14013 not remove any previous versions. Your documents are safe.
14014
14015 7. Apply the changes by clicking OK.
14016
14017 8. Use rclone to upload or modify files. (I also use the –no-up‐
14018 date-modtime flag)
14019
14020 9. Restore the versioning settings after using rclone. (Optional)
14021
14022 Troubleshooting
14023 Error: access_denied
14024 Code: AADSTS65005
14025 Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
14026
14027 This means that rclone can't use the OneDrive for Business API with
14028 your account. You can't do much about it, maybe write an email to your
14029 admins.
14030
14031 However, there are other ways to interact with your OneDrive account.
14032 Have a look at the webdav backend: https://rclone.org/webdav/#share‐
14033 point
14034
14035 Error: invalid_grant
14036 Code: AADSTS50076
14037 Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
14038
14039 If you see the error above after enabling multi-factor authentication
14040 for your account, you can fix it by refreshing your OAuth refresh to‐
14041 ken. To do that, run rclone config, and choose to edit your OneDrive
14042 backend. Then, you don't need to actually make any changes until you
14043 reach this question: Already have a token - refresh?. For this ques‐
14044 tion, answer y and go through the process to refresh your token, just
14045 like the first time the backend is configured. After this, rclone
14046 should work again for this backend.
14047
14048 OpenDrive
14049 Paths are specified as remote:path
14050
14051 Paths may be as deep as required, eg remote:directory/subdirectory.
14052
14053 Here is an example of how to make a remote called remote. First run:
14054
14055 rclone config
14056
14057 This will guide you through an interactive setup process:
14058
14059 n) New remote
14060 d) Delete remote
14061 q) Quit config
14062 e/n/d/q> n
14063 name> remote
14064 Type of storage to configure.
14065 Choose a number from below, or type in your own value
14066 1 / Amazon Drive
14067 \ "amazon cloud drive"
14068 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
14069 \ "s3"
14070 3 / Backblaze B2
14071 \ "b2"
14072 4 / Dropbox
14073 \ "dropbox"
14074 5 / Encrypt/Decrypt a remote
14075 \ "crypt"
14076 6 / Google Cloud Storage (this is not Google Drive)
14077 \ "google cloud storage"
14078 7 / Google Drive
14079 \ "drive"
14080 8 / Hubic
14081 \ "hubic"
14082 9 / Local Disk
14083 \ "local"
14084 10 / OpenDrive
14085 \ "opendrive"
14086 11 / Microsoft OneDrive
14087 \ "onedrive"
14088 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
14089 \ "swift"
14090 13 / SSH/SFTP Connection
14091 \ "sftp"
14092 14 / Yandex Disk
14093 \ "yandex"
14094 Storage> 10
14095 Username
14096 username>
14097 Password
14098 y) Yes type in my own password
14099 g) Generate random password
14100 y/g> y
14101 Enter the password:
14102 password:
14103 Confirm the password:
14104 password:
14105 --------------------
14106 [remote]
14107 username =
14108 password = *** ENCRYPTED ***
14109 --------------------
14110 y) Yes this is OK
14111 e) Edit this remote
14112 d) Delete this remote
14113 y/e/d> y
14114
14115 List directories in top level of your OpenDrive
14116
14117 rclone lsd remote:
14118
14119 List all the files in your OpenDrive
14120
14121 rclone ls remote:
14122
14123 To copy a local directory to an OpenDrive directory called backup
14124
14125 rclone copy /home/source remote:backup
14126
14127 Modified time and MD5SUMs
14128 OpenDrive allows modification times to be set on objects accurate to 1
14129 second. These will be used to detect whether objects need syncing or
14130 not.
14131
14132 Standard Options
14133 Here are the standard options specific to opendrive (OpenDrive).
14134
14135 –opendrive-username
14136 Username
14137
14138 · Config: username
14139
14140 · Env Var: RCLONE_OPENDRIVE_USERNAME
14141
14142 · Type: string
14143
14144 · Default: ""
14145
14146 –opendrive-password
14147 Password.
14148
14149 · Config: password
14150
14151 · Env Var: RCLONE_OPENDRIVE_PASSWORD
14152
14153 · Type: string
14154
14155 · Default: ""
14156
14157 Limitations
14158 Note that OpenDrive is case insensitive so you can't have a file called
14159 “Hello.doc” and one called “hello.doc”.
14160
14161 There are quite a few characters that can't be in OpenDrive file names.
14162 These can't occur on Windows platforms, but on non-Windows platforms
14163 they are common. Rclone will map these names to and from an identical
14164 looking unicode equivalent. For example if a file has a ? in it will
14165 be mapped to ? instead.
14166
14167 QingStor
14168 Paths are specified as remote:bucket (or remote: for the lsd command.)
14169 You may put subdirectories in too, eg remote:bucket/path/to/dir.
14170
14171 Here is an example of making an QingStor configuration. First run
14172
14173 rclone config
14174
14175 This will guide you through an interactive setup process.
14176
14177 No remotes found - make a new one
14178 n) New remote
14179 r) Rename remote
14180 c) Copy remote
14181 s) Set configuration password
14182 q) Quit config
14183 n/r/c/s/q> n
14184 name> remote
14185 Type of storage to configure.
14186 Choose a number from below, or type in your own value
14187 1 / Amazon Drive
14188 \ "amazon cloud drive"
14189 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
14190 \ "s3"
14191 3 / Backblaze B2
14192 \ "b2"
14193 4 / Dropbox
14194 \ "dropbox"
14195 5 / Encrypt/Decrypt a remote
14196 \ "crypt"
14197 6 / FTP Connection
14198 \ "ftp"
14199 7 / Google Cloud Storage (this is not Google Drive)
14200 \ "google cloud storage"
14201 8 / Google Drive
14202 \ "drive"
14203 9 / Hubic
14204 \ "hubic"
14205 10 / Local Disk
14206 \ "local"
14207 11 / Microsoft OneDrive
14208 \ "onedrive"
14209 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
14210 \ "swift"
14211 13 / QingStor Object Storage
14212 \ "qingstor"
14213 14 / SSH/SFTP Connection
14214 \ "sftp"
14215 15 / Yandex Disk
14216 \ "yandex"
14217 Storage> 13
14218 Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
14219 Choose a number from below, or type in your own value
14220 1 / Enter QingStor credentials in the next step
14221 \ "false"
14222 2 / Get QingStor credentials from the environment (env vars or IAM)
14223 \ "true"
14224 env_auth> 1
14225 QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
14226 access_key_id> access_key
14227 QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
14228 secret_access_key> secret_key
14229 Enter a endpoint URL to connection QingStor API.
14230 Leave blank will use the default value "https://qingstor.com:443"
14231 endpoint>
14232 Zone connect to. Default is "pek3a".
14233 Choose a number from below, or type in your own value
14234 / The Beijing (China) Three Zone
14235 1 | Needs location constraint pek3a.
14236 \ "pek3a"
14237 / The Shanghai (China) First Zone
14238 2 | Needs location constraint sh1a.
14239 \ "sh1a"
14240 zone> 1
14241 Number of connnection retry.
14242 Leave blank will use the default value "3".
14243 connection_retries>
14244 Remote config
14245 --------------------
14246 [remote]
14247 env_auth = false
14248 access_key_id = access_key
14249 secret_access_key = secret_key
14250 endpoint =
14251 zone = pek3a
14252 connection_retries =
14253 --------------------
14254 y) Yes this is OK
14255 e) Edit this remote
14256 d) Delete this remote
14257 y/e/d> y
14258
14259 This remote is called remote and can now be used like this
14260
14261 See all buckets
14262
14263 rclone lsd remote:
14264
14265 Make a new bucket
14266
14267 rclone mkdir remote:bucket
14268
14269 List the contents of a bucket
14270
14271 rclone ls remote:bucket
14272
14273 Sync /home/local/directory to the remote bucket, deleting any excess
14274 files in the bucket.
14275
14276 rclone sync /home/local/directory remote:bucket
14277
14278 –fast-list
14279 This remote supports --fast-list which allows you to use fewer transac‐
14280 tions in exchange for more memory. See the rclone docs (/docs/#fast-
14281 list) for more details.
14282
14283 Multipart uploads
14284 rclone supports multipart uploads with QingStor which means that it can
14285 upload files bigger than 5GB. Note that files uploaded with multipart
14286 upload don't have an MD5SUM.
14287
14288 Buckets and Zone
14289 With QingStor you can list buckets (rclone lsd) using any zone, but you
14290 can only access the content of a bucket from the zone it was created
14291 in. If you attempt to access a bucket from the wrong zone, you will
14292 get an error, incorrect zone, the bucket is not in 'XXX' zone.
14293
14294 Authentication
14295 There are two ways to supply rclone with a set of QingStor credentials.
14296 In order of precedence:
14297
14298 · Directly in the rclone configuration file (as configured by
14299 rclone config)
14300
14301 · set access_key_id and secret_access_key
14302
14303 · Runtime configuration:
14304
14305 · set env_auth to true in the config file
14306
14307 · Exporting the following environment variables before running rclone
14308
14309 · Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
14310
14311 · Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
14312
14313 Standard Options
14314 Here are the standard options specific to qingstor (QingCloud Object
14315 Storage).
14316
14317 –qingstor-env-auth
14318 Get QingStor credentials from runtime. Only applies if access_key_id
14319 and secret_access_key is blank.
14320
14321 · Config: env_auth
14322
14323 · Env Var: RCLONE_QINGSTOR_ENV_AUTH
14324
14325 · Type: bool
14326
14327 · Default: false
14328
14329 · Examples:
14330
14331 · “false”
14332
14333 · Enter QingStor credentials in the next step
14334
14335 · “true”
14336
14337 · Get QingStor credentials from the environment (env vars or IAM)
14338
14339 –qingstor-access-key-id
14340 QingStor Access Key ID Leave blank for anonymous access or runtime cre‐
14341 dentials.
14342
14343 · Config: access_key_id
14344
14345 · Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
14346
14347 · Type: string
14348
14349 · Default: ""
14350
14351 –qingstor-secret-access-key
14352 QingStor Secret Access Key (password) Leave blank for anonymous access
14353 or runtime credentials.
14354
14355 · Config: secret_access_key
14356
14357 · Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
14358
14359 · Type: string
14360
14361 · Default: ""
14362
14363 –qingstor-endpoint
14364 Enter a endpoint URL to connection QingStor API. Leave blank will use
14365 the default value “https://qingstor.com:443”
14366
14367 · Config: endpoint
14368
14369 · Env Var: RCLONE_QINGSTOR_ENDPOINT
14370
14371 · Type: string
14372
14373 · Default: ""
14374
14375 –qingstor-zone
14376 Zone to connect to. Default is “pek3a”.
14377
14378 · Config: zone
14379
14380 · Env Var: RCLONE_QINGSTOR_ZONE
14381
14382 · Type: string
14383
14384 · Default: ""
14385
14386 · Examples:
14387
14388 · “pek3a”
14389
14390 · The Beijing (China) Three Zone
14391
14392 · Needs location constraint pek3a.
14393
14394 · “sh1a”
14395
14396 · The Shanghai (China) First Zone
14397
14398 · Needs location constraint sh1a.
14399
14400 · “gd2a”
14401
14402 · The Guangdong (China) Second Zone
14403
14404 · Needs location constraint gd2a.
14405
14406 Advanced Options
14407 Here are the advanced options specific to qingstor (QingCloud Object
14408 Storage).
14409
14410 –qingstor-connection-retries
14411 Number of connection retries.
14412
14413 · Config: connection_retries
14414
14415 · Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
14416
14417 · Type: int
14418
14419 · Default: 3
14420
14421 –qingstor-upload-cutoff
14422 Cutoff for switching to chunked upload
14423
14424 Any files larger than this will be uploaded in chunks of chunk_size.
14425 The minimum is 0 and the maximum is 5GB.
14426
14427 · Config: upload_cutoff
14428
14429 · Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
14430
14431 · Type: SizeSuffix
14432
14433 · Default: 200M
14434
14435 –qingstor-chunk-size
14436 Chunk size to use for uploading.
14437
14438 When uploading files larger than upload_cutoff they will be uploaded as
14439 multipart uploads using this chunk size.
14440
14441 Note that “–qingstor-upload-concurrency” chunks of this size are
14442 buffered in memory per transfer.
14443
14444 If you are transferring large files over high speed links and you have
14445 enough memory, then increasing this will speed up the transfers.
14446
14447 · Config: chunk_size
14448
14449 · Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
14450
14451 · Type: SizeSuffix
14452
14453 · Default: 4M
14454
14455 –qingstor-upload-concurrency
14456 Concurrency for multipart uploads.
14457
14458 This is the number of chunks of the same file that are uploaded concur‐
14459 rently.
14460
14461 NB if you set this to > 1 then the checksums of multpart uploads become
14462 corrupted (the uploads themselves are not corrupted though).
14463
14464 If you are uploading small numbers of large file over high speed link
14465 and these uploads do not fully utilize your bandwidth, then increasing
14466 this may help to speed up the transfers.
14467
14468 · Config: upload_concurrency
14469
14470 · Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
14471
14472 · Type: int
14473
14474 · Default: 1
14475
14476 Swift
14477 Swift refers to Openstack Object Storage (https://docs.open‐
14478 stack.org/swift/latest/). Commercial implementations of that being:
14479
14480 · Rackspace Cloud Files (https://www.rackspace.com/cloud/files/)
14481
14482 · Memset Memstore (https://www.memset.com/cloud/storage/)
14483
14484 · OVH Object Storage (https://www.ovh.co.uk/public-cloud/storage/ob‐
14485 ject-storage/)
14486
14487 · Oracle Cloud Storage (https://cloud.oracle.com/storage-opc)
14488
14489 · IBM Bluemix Cloud ObjectStorage Swift (https://con‐
14490 sole.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
14491
14492 Paths are specified as remote:container (or remote: for the lsd com‐
14493 mand.) You may put subdirectories in too, eg remote:contain‐
14494 er/path/to/dir.
14495
14496 Here is an example of making a swift configuration. First run
14497
14498 rclone config
14499
14500 This will guide you through an interactive setup process.
14501
14502 No remotes found - make a new one
14503 n) New remote
14504 s) Set configuration password
14505 q) Quit config
14506 n/s/q> n
14507 name> remote
14508 Type of storage to configure.
14509 Choose a number from below, or type in your own value
14510 1 / Amazon Drive
14511 \ "amazon cloud drive"
14512 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
14513 \ "s3"
14514 3 / Backblaze B2
14515 \ "b2"
14516 4 / Box
14517 \ "box"
14518 5 / Cache a remote
14519 \ "cache"
14520 6 / Dropbox
14521 \ "dropbox"
14522 7 / Encrypt/Decrypt a remote
14523 \ "crypt"
14524 8 / FTP Connection
14525 \ "ftp"
14526 9 / Google Cloud Storage (this is not Google Drive)
14527 \ "google cloud storage"
14528 10 / Google Drive
14529 \ "drive"
14530 11 / Hubic
14531 \ "hubic"
14532 12 / Local Disk
14533 \ "local"
14534 13 / Microsoft Azure Blob Storage
14535 \ "azureblob"
14536 14 / Microsoft OneDrive
14537 \ "onedrive"
14538 15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
14539 \ "swift"
14540 16 / Pcloud
14541 \ "pcloud"
14542 17 / QingCloud Object Storage
14543 \ "qingstor"
14544 18 / SSH/SFTP Connection
14545 \ "sftp"
14546 19 / Webdav
14547 \ "webdav"
14548 20 / Yandex Disk
14549 \ "yandex"
14550 21 / http Connection
14551 \ "http"
14552 Storage> swift
14553 Get swift credentials from environment variables in standard OpenStack form.
14554 Choose a number from below, or type in your own value
14555 1 / Enter swift credentials in the next step
14556 \ "false"
14557 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
14558 \ "true"
14559 env_auth> true
14560 User name to log in (OS_USERNAME).
14561 user>
14562 API key or password (OS_PASSWORD).
14563 key>
14564 Authentication URL for server (OS_AUTH_URL).
14565 Choose a number from below, or type in your own value
14566 1 / Rackspace US
14567 \ "https://auth.api.rackspacecloud.com/v1.0"
14568 2 / Rackspace UK
14569 \ "https://lon.auth.api.rackspacecloud.com/v1.0"
14570 3 / Rackspace v2
14571 \ "https://identity.api.rackspacecloud.com/v2.0"
14572 4 / Memset Memstore UK
14573 \ "https://auth.storage.memset.com/v1.0"
14574 5 / Memset Memstore UK v2
14575 \ "https://auth.storage.memset.com/v2.0"
14576 6 / OVH
14577 \ "https://auth.cloud.ovh.net/v2.0"
14578 auth>
14579 User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
14580 user_id>
14581 User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
14582 domain>
14583 Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
14584 tenant>
14585 Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
14586 tenant_id>
14587 Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
14588 tenant_domain>
14589 Region name - optional (OS_REGION_NAME)
14590 region>
14591 Storage URL - optional (OS_STORAGE_URL)
14592 storage_url>
14593 Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
14594 auth_token>
14595 AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
14596 auth_version>
14597 Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
14598 Choose a number from below, or type in your own value
14599 1 / Public (default, choose this if not sure)
14600 \ "public"
14601 2 / Internal (use internal service net)
14602 \ "internal"
14603 3 / Admin
14604 \ "admin"
14605 endpoint_type>
14606 Remote config
14607 --------------------
14608 [test]
14609 env_auth = true
14610 user =
14611 key =
14612 auth =
14613 user_id =
14614 domain =
14615 tenant =
14616 tenant_id =
14617 tenant_domain =
14618 region =
14619 storage_url =
14620 auth_token =
14621 auth_version =
14622 endpoint_type =
14623 --------------------
14624 y) Yes this is OK
14625 e) Edit this remote
14626 d) Delete this remote
14627 y/e/d> y
14628
14629 This remote is called remote and can now be used like this
14630
14631 See all containers
14632
14633 rclone lsd remote:
14634
14635 Make a new container
14636
14637 rclone mkdir remote:container
14638
14639 List the contents of a container
14640
14641 rclone ls remote:container
14642
14643 Sync /home/local/directory to the remote container, deleting any excess
14644 files in the container.
14645
14646 rclone sync /home/local/directory remote:container
14647
14648 Configuration from an OpenStack credentials file
14649 An OpenStack credentials file typically looks something something like
14650 this (without the comments)
14651
14652 export OS_AUTH_URL=https://a.provider.net/v2.0
14653 export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
14654 export OS_TENANT_NAME="1234567890123456"
14655 export OS_USERNAME="123abc567xy"
14656 echo "Please enter your OpenStack Password: "
14657 read -sr OS_PASSWORD_INPUT
14658 export OS_PASSWORD=$OS_PASSWORD_INPUT
14659 export OS_REGION_NAME="SBG1"
14660 if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
14661
14662 The config file needs to look something like this where $OS_USERNAME
14663 represents the value of the OS_USERNAME variable - 123abc567xy in the
14664 example above.
14665
14666 [remote]
14667 type = swift
14668 user = $OS_USERNAME
14669 key = $OS_PASSWORD
14670 auth = $OS_AUTH_URL
14671 tenant = $OS_TENANT_NAME
14672
14673 Note that you may (or may not) need to set region too - try without
14674 first.
14675
14676 Configuration from the environment
14677 If you prefer you can configure rclone to use swift using a standard
14678 set of OpenStack environment variables.
14679
14680 When you run through the config, make sure you choose true for env_auth
14681 and leave everything else blank.
14682
14683 rclone will then set any empty config parameters from the environment
14684 using standard OpenStack environment variables. There is a list of the
14685 variables (https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvi‐
14686 ronment) in the docs for the swift library.
14687
14688 Using an alternate authentication method
14689 If your OpenStack installation uses a non-standard authentication
14690 method that might not be yet supported by rclone or the underlying
14691 swift library, you can authenticate externally (e.g. calling manually
14692 the openstack commands to get a token). Then, you just need to pass
14693 the two configuration variables auth_token and storage_url. If they
14694 are both provided, the other variables are ignored. rclone will not
14695 try to authenticate but instead assume it is already authenticated and
14696 use these two variables to access the OpenStack installation.
14697
14698 Using rclone without a config file
14699 You can use rclone with swift without a config file, if desired, like
14700 this:
14701
14702 source openstack-credentials-file
14703 export RCLONE_CONFIG_MYREMOTE_TYPE=swift
14704 export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
14705 rclone lsd myremote:
14706
14707 –fast-list
14708 This remote supports --fast-list which allows you to use fewer transac‐
14709 tions in exchange for more memory. See the rclone docs (/docs/#fast-
14710 list) for more details.
14711
14712 –update and –use-server-modtime
14713 As noted below, the modified time is stored on metadata on the object.
14714 It is used by default for all operations that require checking the time
14715 a file was last updated. It allows rclone to treat the remote more
14716 like a true filesystem, but it is inefficient because it requires an
14717 extra API call to retrieve the metadata.
14718
14719 For many operations, the time the object was last uploaded to the re‐
14720 mote is sufficient to determine if it is “dirty”. By using --update
14721 along with --use-server-modtime, you can avoid the extra API call and
14722 simply upload files whose local modtime is newer than the time it was
14723 last uploaded.
14724
14725 Standard Options
14726 Here are the standard options specific to swift (Openstack Swift
14727 (Rackspace Cloud Files, Memset Memstore, OVH)).
14728
14729 –swift-env-auth
14730 Get swift credentials from environment variables in standard OpenStack
14731 form.
14732
14733 · Config: env_auth
14734
14735 · Env Var: RCLONE_SWIFT_ENV_AUTH
14736
14737 · Type: bool
14738
14739 · Default: false
14740
14741 · Examples:
14742
14743 · “false”
14744
14745 · Enter swift credentials in the next step
14746
14747 · “true”
14748
14749 · Get swift credentials from environment vars. Leave other fields
14750 blank if using this.
14751
14752 –swift-user
14753 User name to log in (OS_USERNAME).
14754
14755 · Config: user
14756
14757 · Env Var: RCLONE_SWIFT_USER
14758
14759 · Type: string
14760
14761 · Default: ""
14762
14763 –swift-key
14764 API key or password (OS_PASSWORD).
14765
14766 · Config: key
14767
14768 · Env Var: RCLONE_SWIFT_KEY
14769
14770 · Type: string
14771
14772 · Default: ""
14773
14774 –swift-auth
14775 Authentication URL for server (OS_AUTH_URL).
14776
14777 · Config: auth
14778
14779 · Env Var: RCLONE_SWIFT_AUTH
14780
14781 · Type: string
14782
14783 · Default: ""
14784
14785 · Examples:
14786
14787 · “https://auth.api.rackspacecloud.com/v1.0”
14788
14789 · Rackspace US
14790
14791 · “https://lon.auth.api.rackspacecloud.com/v1.0”
14792
14793 · Rackspace UK
14794
14795 · “https://identity.api.rackspacecloud.com/v2.0”
14796
14797 · Rackspace v2
14798
14799 · “https://auth.storage.memset.com/v1.0”
14800
14801 · Memset Memstore UK
14802
14803 · “https://auth.storage.memset.com/v2.0”
14804
14805 · Memset Memstore UK v2
14806
14807 · “https://auth.cloud.ovh.net/v2.0”
14808
14809 · OVH
14810
14811 –swift-user-id
14812 User ID to log in - optional - most swift systems use user and leave
14813 this blank (v3 auth) (OS_USER_ID).
14814
14815 · Config: user_id
14816
14817 · Env Var: RCLONE_SWIFT_USER_ID
14818
14819 · Type: string
14820
14821 · Default: ""
14822
14823 –swift-domain
14824 User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
14825
14826 · Config: domain
14827
14828 · Env Var: RCLONE_SWIFT_DOMAIN
14829
14830 · Type: string
14831
14832 · Default: ""
14833
14834 –swift-tenant
14835 Tenant name - optional for v1 auth, this or tenant_id required other‐
14836 wise (OS_TENANT_NAME or OS_PROJECT_NAME)
14837
14838 · Config: tenant
14839
14840 · Env Var: RCLONE_SWIFT_TENANT
14841
14842 · Type: string
14843
14844 · Default: ""
14845
14846 –swift-tenant-id
14847 Tenant ID - optional for v1 auth, this or tenant required otherwise
14848 (OS_TENANT_ID)
14849
14850 · Config: tenant_id
14851
14852 · Env Var: RCLONE_SWIFT_TENANT_ID
14853
14854 · Type: string
14855
14856 · Default: ""
14857
14858 –swift-tenant-domain
14859 Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
14860
14861 · Config: tenant_domain
14862
14863 · Env Var: RCLONE_SWIFT_TENANT_DOMAIN
14864
14865 · Type: string
14866
14867 · Default: ""
14868
14869 –swift-region
14870 Region name - optional (OS_REGION_NAME)
14871
14872 · Config: region
14873
14874 · Env Var: RCLONE_SWIFT_REGION
14875
14876 · Type: string
14877
14878 · Default: ""
14879
14880 –swift-storage-url
14881 Storage URL - optional (OS_STORAGE_URL)
14882
14883 · Config: storage_url
14884
14885 · Env Var: RCLONE_SWIFT_STORAGE_URL
14886
14887 · Type: string
14888
14889 · Default: ""
14890
14891 –swift-auth-token
14892 Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
14893
14894 · Config: auth_token
14895
14896 · Env Var: RCLONE_SWIFT_AUTH_TOKEN
14897
14898 · Type: string
14899
14900 · Default: ""
14901
14902 –swift-application-credential-id
14903 Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
14904
14905 · Config: application_credential_id
14906
14907 · Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
14908
14909 · Type: string
14910
14911 · Default: ""
14912
14913 –swift-application-credential-name
14914 Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
14915
14916 · Config: application_credential_name
14917
14918 · Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
14919
14920 · Type: string
14921
14922 · Default: ""
14923
14924 –swift-application-credential-secret
14925 Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
14926
14927 · Config: application_credential_secret
14928
14929 · Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
14930
14931 · Type: string
14932
14933 · Default: ""
14934
14935 –swift-auth-version
14936 AuthVersion - optional - set to (1,2,3) if your auth URL has no version
14937 (ST_AUTH_VERSION)
14938
14939 · Config: auth_version
14940
14941 · Env Var: RCLONE_SWIFT_AUTH_VERSION
14942
14943 · Type: int
14944
14945 · Default: 0
14946
14947 –swift-endpoint-type
14948 Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
14949
14950 · Config: endpoint_type
14951
14952 · Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
14953
14954 · Type: string
14955
14956 · Default: “public”
14957
14958 · Examples:
14959
14960 · “public”
14961
14962 · Public (default, choose this if not sure)
14963
14964 · “internal”
14965
14966 · Internal (use internal service net)
14967
14968 · “admin”
14969
14970 · Admin
14971
14972 –swift-storage-policy
14973 The storage policy to use when creating a new container
14974
14975 This applies the specified storage policy when creating a new contain‐
14976 er. The policy cannot be changed afterwards. The allowed configura‐
14977 tion values and their meaning depend on your Swift storage provider.
14978
14979 · Config: storage_policy
14980
14981 · Env Var: RCLONE_SWIFT_STORAGE_POLICY
14982
14983 · Type: string
14984
14985 · Default: ""
14986
14987 · Examples:
14988
14989 · ""
14990
14991 · Default
14992
14993 · “pcs”
14994
14995 · OVH Public Cloud Storage
14996
14997 · “pca”
14998
14999 · OVH Public Cloud Archive
15000
15001 Advanced Options
15002 Here are the advanced options specific to swift (Openstack Swift
15003 (Rackspace Cloud Files, Memset Memstore, OVH)).
15004
15005 –swift-chunk-size
15006 Above this size files will be chunked into a _segments container.
15007
15008 Above this size files will be chunked into a _segments container. The
15009 default for this is 5GB which is its maximum value.
15010
15011 · Config: chunk_size
15012
15013 · Env Var: RCLONE_SWIFT_CHUNK_SIZE
15014
15015 · Type: SizeSuffix
15016
15017 · Default: 5G
15018
15019 –swift-no-chunk
15020 Don't chunk files during streaming upload.
15021
15022 When doing streaming uploads (eg using rcat or mount) setting this flag
15023 will cause the swift backend to not upload chunked files.
15024
15025 This will limit the maximum upload size to 5GB. However non chunked
15026 files are easier to deal with and have an MD5SUM.
15027
15028 Rclone will still chunk files bigger than chunk_size when doing normal
15029 copy operations.
15030
15031 · Config: no_chunk
15032
15033 · Env Var: RCLONE_SWIFT_NO_CHUNK
15034
15035 · Type: bool
15036
15037 · Default: false
15038
15039 Modified time
15040 The modified time is stored as metadata on the object as X-Ob‐
15041 ject-Meta-Mtime as floating point since the epoch accurate to 1 ns.
15042
15043 This is a defacto standard (used in the official python-swiftclient
15044 amongst others) for storing the modification time for an object.
15045
15046 Limitations
15047 The Swift API doesn't return a correct MD5SUM for segmented files (Dy‐
15048 namic or Static Large Objects) so rclone won't check or use the MD5SUM
15049 for these.
15050
15051 Troubleshooting
15052 Rclone gives Failed to create file system for “remote:”: Bad
15053 Request
15054
15055 Due to an oddity of the underlying swift library, it gives a “Bad Re‐
15056 quest” error rather than a more sensible error when the authentication
15057 fails for Swift.
15058
15059 So this most likely means your username / password is wrong. You can
15060 investigate further with the --dump-bodies flag.
15061
15062 This may also be caused by specifying the region when you shouldn't
15063 have (eg OVH).
15064
15065 Rclone gives Failed to create file system: Response didn't have
15066 storage storage url and auth token
15067
15068 This is most likely caused by forgetting to specify your tenant when
15069 setting up a swift remote.
15070
15071 pCloud
15072 Paths are specified as remote:path
15073
15074 Paths may be as deep as required, eg remote:directory/subdirectory.
15075
15076 The initial setup for pCloud involves getting a token from pCloud which
15077 you need to do in your browser. rclone config walks you through it.
15078
15079 Here is an example of how to make a remote called remote. First run:
15080
15081 rclone config
15082
15083 This will guide you through an interactive setup process:
15084
15085 No remotes found - make a new one
15086 n) New remote
15087 s) Set configuration password
15088 q) Quit config
15089 n/s/q> n
15090 name> remote
15091 Type of storage to configure.
15092 Choose a number from below, or type in your own value
15093 1 / Amazon Drive
15094 \ "amazon cloud drive"
15095 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
15096 \ "s3"
15097 3 / Backblaze B2
15098 \ "b2"
15099 4 / Box
15100 \ "box"
15101 5 / Dropbox
15102 \ "dropbox"
15103 6 / Encrypt/Decrypt a remote
15104 \ "crypt"
15105 7 / FTP Connection
15106 \ "ftp"
15107 8 / Google Cloud Storage (this is not Google Drive)
15108 \ "google cloud storage"
15109 9 / Google Drive
15110 \ "drive"
15111 10 / Hubic
15112 \ "hubic"
15113 11 / Local Disk
15114 \ "local"
15115 12 / Microsoft Azure Blob Storage
15116 \ "azureblob"
15117 13 / Microsoft OneDrive
15118 \ "onedrive"
15119 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
15120 \ "swift"
15121 15 / Pcloud
15122 \ "pcloud"
15123 16 / QingCloud Object Storage
15124 \ "qingstor"
15125 17 / SSH/SFTP Connection
15126 \ "sftp"
15127 18 / Yandex Disk
15128 \ "yandex"
15129 19 / http Connection
15130 \ "http"
15131 Storage> pcloud
15132 Pcloud App Client Id - leave blank normally.
15133 client_id>
15134 Pcloud App Client Secret - leave blank normally.
15135 client_secret>
15136 Remote config
15137 Use auto config?
15138 * Say Y if not sure
15139 * Say N if you are working on a remote or headless machine
15140 y) Yes
15141 n) No
15142 y/n> y
15143 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
15144 Log in and authorize rclone for access
15145 Waiting for code...
15146 Got code
15147 --------------------
15148 [remote]
15149 client_id =
15150 client_secret =
15151 token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
15152 --------------------
15153 y) Yes this is OK
15154 e) Edit this remote
15155 d) Delete this remote
15156 y/e/d> y
15157
15158 See the remote setup docs (https://rclone.org/remote_setup/) for how to
15159 set it up on a machine with no Internet browser available.
15160
15161 Note that rclone runs a webserver on your local machine to collect the
15162 token as returned from pCloud. This only runs from the moment it opens
15163 your browser to the moment you get back the verification code. This is
15164 on http://127.0.0.1:53682/ and this it may require you to unblock it
15165 temporarily if you are running a host firewall.
15166
15167 Once configured you can then use rclone like this,
15168
15169 List directories in top level of your pCloud
15170
15171 rclone lsd remote:
15172
15173 List all the files in your pCloud
15174
15175 rclone ls remote:
15176
15177 To copy a local directory to an pCloud directory called backup
15178
15179 rclone copy /home/source remote:backup
15180
15181 Modified time and hashes
15182 pCloud allows modification times to be set on objects accurate to 1
15183 second. These will be used to detect whether objects need syncing or
15184 not. In order to set a Modification time pCloud requires the object be
15185 re-uploaded.
15186
15187 pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum
15188 flag.
15189
15190 Deleting files
15191 Deleted files will be moved to the trash. Your subscription level will
15192 determine how long items stay in the trash. rclone cleanup can be used
15193 to empty the trash.
15194
15195 Standard Options
15196 Here are the standard options specific to pcloud (Pcloud).
15197
15198 –pcloud-client-id
15199 Pcloud App Client Id Leave blank normally.
15200
15201 · Config: client_id
15202
15203 · Env Var: RCLONE_PCLOUD_CLIENT_ID
15204
15205 · Type: string
15206
15207 · Default: ""
15208
15209 –pcloud-client-secret
15210 Pcloud App Client Secret Leave blank normally.
15211
15212 · Config: client_secret
15213
15214 · Env Var: RCLONE_PCLOUD_CLIENT_SECRET
15215
15216 · Type: string
15217
15218 · Default: ""
15219
15220 SFTP
15221 SFTP is the Secure (or SSH) File Transfer Protocol
15222 (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
15223
15224 SFTP runs over SSH v2 and is installed as standard with most modern SSH
15225 installations.
15226
15227 Paths are specified as remote:path. If the path does not begin with a
15228 / it is relative to the home directory of the user. An empty path re‐
15229 mote: refers to the user's home directory.
15230
15231 Note that some SFTP servers will need the leading / - Synology is a
15232 good example of this.
15233
15234 Here is an example of making an SFTP configuration. First run
15235
15236 rclone config
15237
15238 This will guide you through an interactive setup process.
15239
15240 No remotes found - make a new one
15241 n) New remote
15242 s) Set configuration password
15243 q) Quit config
15244 n/s/q> n
15245 name> remote
15246 Type of storage to configure.
15247 Choose a number from below, or type in your own value
15248 1 / Amazon Drive
15249 \ "amazon cloud drive"
15250 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
15251 \ "s3"
15252 3 / Backblaze B2
15253 \ "b2"
15254 4 / Dropbox
15255 \ "dropbox"
15256 5 / Encrypt/Decrypt a remote
15257 \ "crypt"
15258 6 / FTP Connection
15259 \ "ftp"
15260 7 / Google Cloud Storage (this is not Google Drive)
15261 \ "google cloud storage"
15262 8 / Google Drive
15263 \ "drive"
15264 9 / Hubic
15265 \ "hubic"
15266 10 / Local Disk
15267 \ "local"
15268 11 / Microsoft OneDrive
15269 \ "onedrive"
15270 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
15271 \ "swift"
15272 13 / SSH/SFTP Connection
15273 \ "sftp"
15274 14 / Yandex Disk
15275 \ "yandex"
15276 15 / http Connection
15277 \ "http"
15278 Storage> sftp
15279 SSH host to connect to
15280 Choose a number from below, or type in your own value
15281 1 / Connect to example.com
15282 \ "example.com"
15283 host> example.com
15284 SSH username, leave blank for current username, ncw
15285 user> sftpuser
15286 SSH port, leave blank to use default (22)
15287 port>
15288 SSH password, leave blank to use ssh-agent.
15289 y) Yes type in my own password
15290 g) Generate random password
15291 n) No leave this optional password blank
15292 y/g/n> n
15293 Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
15294 key_file>
15295 Remote config
15296 --------------------
15297 [remote]
15298 host = example.com
15299 user = sftpuser
15300 port =
15301 pass =
15302 key_file =
15303 --------------------
15304 y) Yes this is OK
15305 e) Edit this remote
15306 d) Delete this remote
15307 y/e/d> y
15308
15309 This remote is called remote and can now be used like this:
15310
15311 See all directories in the home directory
15312
15313 rclone lsd remote:
15314
15315 Make a new directory
15316
15317 rclone mkdir remote:path/to/directory
15318
15319 List the contents of a directory
15320
15321 rclone ls remote:path/to/directory
15322
15323 Sync /home/local/directory to the remote directory, deleting any excess
15324 files in the directory.
15325
15326 rclone sync /home/local/directory remote:directory
15327
15328 SSH Authentication
15329 The SFTP remote supports three authentication methods:
15330
15331 · Password
15332
15333 · Key file
15334
15335 · ssh-agent
15336
15337 Key files should be PEM-encoded private key files. For instance
15338 /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted
15339 files are supported.
15340
15341 If you don't specify pass or key_file then rclone will attempt to con‐
15342 tact an ssh-agent.
15343
15344 You can also specify key_use_agent to force the usage of an ssh-agent.
15345 In this case key_file can also be specified to force the usage of a
15346 specific key in the ssh-agent.
15347
15348 Using an ssh-agent is the only way to load encrypted OpenSSH keys at
15349 the moment.
15350
15351 If you set the --sftp-ask-password option, rclone will prompt for a
15352 password when needed and no password has been configured.
15353
15354 ssh-agent on macOS
15355 Note that there seem to be various problems with using an ssh-agent on
15356 macOS due to recent changes in the OS. The most effective work-around
15357 seems to be to start an ssh-agent in each session, eg
15358
15359 eval `ssh-agent -s` && ssh-add -A
15360
15361 And then at the end of the session
15362
15363 eval `ssh-agent -k`
15364
15365 These commands can be used in scripts of course.
15366
15367 Modified time
15368 Modified times are stored on the server to 1 second precision.
15369
15370 Modified times are used in syncing and are fully supported.
15371
15372 Some SFTP servers disable setting/modifying the file modification time
15373 after upload (for example, certain configurations of ProFTPd with
15374 mod_sftp). If you are using one of these servers, you can set the op‐
15375 tion set_modtime = false in your RClone backend configuration to dis‐
15376 able this behaviour.
15377
15378 Standard Options
15379 Here are the standard options specific to sftp (SSH/SFTP Connection).
15380
15381 –sftp-host
15382 SSH host to connect to
15383
15384 · Config: host
15385
15386 · Env Var: RCLONE_SFTP_HOST
15387
15388 · Type: string
15389
15390 · Default: ""
15391
15392 · Examples:
15393
15394 · “example.com”
15395
15396 · Connect to example.com
15397
15398 –sftp-user
15399 SSH username, leave blank for current username, ncw
15400
15401 · Config: user
15402
15403 · Env Var: RCLONE_SFTP_USER
15404
15405 · Type: string
15406
15407 · Default: ""
15408
15409 –sftp-port
15410 SSH port, leave blank to use default (22)
15411
15412 · Config: port
15413
15414 · Env Var: RCLONE_SFTP_PORT
15415
15416 · Type: string
15417
15418 · Default: ""
15419
15420 –sftp-pass
15421 SSH password, leave blank to use ssh-agent.
15422
15423 · Config: pass
15424
15425 · Env Var: RCLONE_SFTP_PASS
15426
15427 · Type: string
15428
15429 · Default: ""
15430
15431 –sftp-key-file
15432 Path to PEM-encoded private key file, leave blank or set key-use-agent
15433 to use ssh-agent.
15434
15435 · Config: key_file
15436
15437 · Env Var: RCLONE_SFTP_KEY_FILE
15438
15439 · Type: string
15440
15441 · Default: ""
15442
15443 –sftp-key-file-pass
15444 The passphrase to decrypt the PEM-encoded private key file.
15445
15446 Only PEM encrypted key files (old OpenSSH format) are supported. En‐
15447 crypted keys in the new OpenSSH format can't be used.
15448
15449 · Config: key_file_pass
15450
15451 · Env Var: RCLONE_SFTP_KEY_FILE_PASS
15452
15453 · Type: string
15454
15455 · Default: ""
15456
15457 –sftp-key-use-agent
15458 When set forces the usage of the ssh-agent.
15459
15460 When key-file is also set, the “.pub” file of the specified key-file is
15461 read and only the associated key is requested from the ssh-agent. This
15462 allows to avoid Too many authentication failures for *username* errors
15463 when the ssh-agent contains many keys.
15464
15465 · Config: key_use_agent
15466
15467 · Env Var: RCLONE_SFTP_KEY_USE_AGENT
15468
15469 · Type: bool
15470
15471 · Default: false
15472
15473 –sftp-use-insecure-cipher
15474 Enable the use of the aes128-cbc cipher. This cipher is insecure and
15475 may allow plaintext data to be recovered by an attacker.
15476
15477 · Config: use_insecure_cipher
15478
15479 · Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
15480
15481 · Type: bool
15482
15483 · Default: false
15484
15485 · Examples:
15486
15487 · “false”
15488
15489 · Use default Cipher list.
15490
15491 · “true”
15492
15493 · Enables the use of the aes128-cbc cipher.
15494
15495 –sftp-disable-hashcheck
15496 Disable the execution of SSH commands to determine if remote file hash‐
15497 ing is available. Leave blank or set to false to enable hashing (rec‐
15498 ommended), set to true to disable hashing.
15499
15500 · Config: disable_hashcheck
15501
15502 · Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
15503
15504 · Type: bool
15505
15506 · Default: false
15507
15508 Advanced Options
15509 Here are the advanced options specific to sftp (SSH/SFTP Connection).
15510
15511 –sftp-ask-password
15512 Allow asking for SFTP password when needed.
15513
15514 · Config: ask_password
15515
15516 · Env Var: RCLONE_SFTP_ASK_PASSWORD
15517
15518 · Type: bool
15519
15520 · Default: false
15521
15522 –sftp-path-override
15523 Override path used by SSH connection.
15524
15525 This allows checksum calculation when SFTP and SSH paths are different.
15526 This issue affects among others Synology NAS boxes.
15527
15528 Shared folders can be found in directories representing volumes
15529
15530 rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
15531
15532 Home directory can be found in a shared folder called “home”
15533
15534 rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
15535
15536 · Config: path_override
15537
15538 · Env Var: RCLONE_SFTP_PATH_OVERRIDE
15539
15540 · Type: string
15541
15542 · Default: ""
15543
15544 –sftp-set-modtime
15545 Set the modified time on the remote if set.
15546
15547 · Config: set_modtime
15548
15549 · Env Var: RCLONE_SFTP_SET_MODTIME
15550
15551 · Type: bool
15552
15553 · Default: true
15554
15555 Limitations
15556 SFTP supports checksums if the same login has shell access and md5sum
15557 or sha1sum as well as echo are in the remote's PATH. This remote
15558 checksumming (file hashing) is recommended and enabled by default.
15559 Disabling the checksumming may be required if you are connecting to
15560 SFTP servers which are not under your control, and to which the execu‐
15561 tion of remote commands is prohibited. Set the configuration option
15562 disable_hashcheck to true to disable checksumming.
15563
15564 Note that some SFTP servers (eg Synology) the paths are different for
15565 SSH and SFTP so the hashes can't be calculated properly. For them us‐
15566 ing disable_hashcheck is a good idea.
15567
15568 The only ssh agent supported under Windows is Putty's pageant.
15569
15570 The Go SSH library disables the use of the aes128-cbc cipher by de‐
15571 fault, due to security concerns. This can be re-enabled on a per-con‐
15572 nection basis by setting the use_insecure_cipher setting in the config‐
15573 uration file to true. Further details on the insecurity of this cipher
15574 can be found [in this paper] (http://www.isg.rhul.ac.uk/~kp/SandPfi‐
15575 nal.pdf).
15576
15577 SFTP isn't supported under plan9 until this issue
15578 (https://github.com/pkg/sftp/issues/156) is fixed.
15579
15580 Note that since SFTP isn't HTTP based the following flags don't work
15581 with it: --dump-headers, --dump-bodies, --dump-auth
15582
15583 Note that --timeout isn't supported (but --contimeout is).
15584
15585 Union
15586 The union remote provides a unification similar to UnionFS using other
15587 remotes.
15588
15589 Paths may be as deep as required or a local path, eg remote:directo‐
15590 ry/subdirectory or /directory/subdirectory.
15591
15592 During the initial setup with rclone config you will specify the target
15593 remotes as a space separated list. The target remotes can either be a
15594 local paths or other remotes.
15595
15596 The order of the remotes is important as it defines which remotes take
15597 precedence over others if there are files with the same name in the
15598 same logical path. The last remote is the topmost remote and replaces
15599 files with the same name from previous remotes.
15600
15601 Only the last remote is used to write to and delete from, all other re‐
15602 motes are read-only.
15603
15604 Subfolders can be used in target remote. Assume a union remote named
15605 backup with the remotes mydrive:private/backup mydrive2:/backup. In‐
15606 voking rclone mkdir backup:desktop is exactly the same as invoking
15607 rclone mkdir mydrive2:/backup/desktop.
15608
15609 There will be no special handling of paths containing .. segments.
15610 Invoking rclone mkdir backup:../desktop is exactly the same as invoking
15611 rclone mkdir mydrive2:/backup/../desktop.
15612
15613 Here is an example of how to make a union called remote for local fold‐
15614 ers. First run:
15615
15616 rclone config
15617
15618 This will guide you through an interactive setup process:
15619
15620 No remotes found - make a new one
15621 n) New remote
15622 s) Set configuration password
15623 q) Quit config
15624 n/s/q> n
15625 name> remote
15626 Type of storage to configure.
15627 Choose a number from below, or type in your own value
15628 1 / Alias for a existing remote
15629 \ "alias"
15630 2 / Amazon Drive
15631 \ "amazon cloud drive"
15632 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
15633 \ "s3"
15634 4 / Backblaze B2
15635 \ "b2"
15636 5 / Box
15637 \ "box"
15638 6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes
15639 \ "union"
15640 7 / Cache a remote
15641 \ "cache"
15642 8 / Dropbox
15643 \ "dropbox"
15644 9 / Encrypt/Decrypt a remote
15645 \ "crypt"
15646 10 / FTP Connection
15647 \ "ftp"
15648 11 / Google Cloud Storage (this is not Google Drive)
15649 \ "google cloud storage"
15650 12 / Google Drive
15651 \ "drive"
15652 13 / Hubic
15653 \ "hubic"
15654 14 / JottaCloud
15655 \ "jottacloud"
15656 15 / Local Disk
15657 \ "local"
15658 16 / Mega
15659 \ "mega"
15660 17 / Microsoft Azure Blob Storage
15661 \ "azureblob"
15662 18 / Microsoft OneDrive
15663 \ "onedrive"
15664 19 / OpenDrive
15665 \ "opendrive"
15666 20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
15667 \ "swift"
15668 21 / Pcloud
15669 \ "pcloud"
15670 22 / QingCloud Object Storage
15671 \ "qingstor"
15672 23 / SSH/SFTP Connection
15673 \ "sftp"
15674 24 / Webdav
15675 \ "webdav"
15676 25 / Yandex Disk
15677 \ "yandex"
15678 26 / http Connection
15679 \ "http"
15680 Storage> union
15681 List of space separated remotes.
15682 Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
15683 The last remote is used to write to.
15684 Enter a string value. Press Enter for the default ("").
15685 remotes>
15686 Remote config
15687 --------------------
15688 [remote]
15689 type = union
15690 remotes = C:\dir1 C:\dir2 C:\dir3
15691 --------------------
15692 y) Yes this is OK
15693 e) Edit this remote
15694 d) Delete this remote
15695 y/e/d> y
15696 Current remotes:
15697
15698 Name Type
15699 ==== ====
15700 remote union
15701
15702 e) Edit existing remote
15703 n) New remote
15704 d) Delete remote
15705 r) Rename remote
15706 c) Copy remote
15707 s) Set configuration password
15708 q) Quit config
15709 e/n/d/r/c/s/q> q
15710
15711 Once configured you can then use rclone like this,
15712
15713 List directories in top level in C:\dir1, C:\dir2 and C:\dir3
15714
15715 rclone lsd remote:
15716
15717 List all the files in C:\dir1, C:\dir2 and C:\dir3
15718
15719 rclone ls remote:
15720
15721 Copy another local directory to the union directory called source,
15722 which will be placed into C:\dir3
15723
15724 rclone copy C:\source remote:source
15725
15726 Standard Options
15727 Here are the standard options specific to union (A stackable unifica‐
15728 tion remote, which can appear to merge the contents of several re‐
15729 motes).
15730
15731 –union-remotes
15732 List of space separated remotes. Can be `remotea:test/dir remoteb:',
15733 `“remotea:test/space dir” remoteb:', etc. The last remote is used to
15734 write to.
15735
15736 · Config: remotes
15737
15738 · Env Var: RCLONE_UNION_REMOTES
15739
15740 · Type: string
15741
15742 · Default: ""
15743
15744 WebDAV
15745 Paths are specified as remote:path
15746
15747 Paths may be as deep as required, eg remote:directory/subdirectory.
15748
15749 To configure the WebDAV remote you will need to have a URL for it, and
15750 a username and password. If you know what kind of system you are con‐
15751 necting to then rclone can enable extra features.
15752
15753 Here is an example of how to make a remote called remote. First run:
15754
15755 rclone config
15756
15757 This will guide you through an interactive setup process:
15758
15759 No remotes found - make a new one
15760 n) New remote
15761 s) Set configuration password
15762 q) Quit config
15763 n/s/q> n
15764 name> remote
15765 Type of storage to configure.
15766 Choose a number from below, or type in your own value
15767 [snip]
15768 22 / Webdav
15769 \ "webdav"
15770 [snip]
15771 Storage> webdav
15772 URL of http host to connect to
15773 Choose a number from below, or type in your own value
15774 1 / Connect to example.com
15775 \ "https://example.com"
15776 url> https://example.com/remote.php/webdav/
15777 Name of the Webdav site/service/software you are using
15778 Choose a number from below, or type in your own value
15779 1 / Nextcloud
15780 \ "nextcloud"
15781 2 / Owncloud
15782 \ "owncloud"
15783 3 / Sharepoint
15784 \ "sharepoint"
15785 4 / Other site/service or software
15786 \ "other"
15787 vendor> 1
15788 User name
15789 user> user
15790 Password.
15791 y) Yes type in my own password
15792 g) Generate random password
15793 n) No leave this optional password blank
15794 y/g/n> y
15795 Enter the password:
15796 password:
15797 Confirm the password:
15798 password:
15799 Bearer token instead of user/pass (eg a Macaroon)
15800 bearer_token>
15801 Remote config
15802 --------------------
15803 [remote]
15804 type = webdav
15805 url = https://example.com/remote.php/webdav/
15806 vendor = nextcloud
15807 user = user
15808 pass = *** ENCRYPTED ***
15809 bearer_token =
15810 --------------------
15811 y) Yes this is OK
15812 e) Edit this remote
15813 d) Delete this remote
15814 y/e/d> y
15815
15816 Once configured you can then use rclone like this,
15817
15818 List directories in top level of your WebDAV
15819
15820 rclone lsd remote:
15821
15822 List all the files in your WebDAV
15823
15824 rclone ls remote:
15825
15826 To copy a local directory to an WebDAV directory called backup
15827
15828 rclone copy /home/source remote:backup
15829
15830 Modified time and hashes
15831 Plain WebDAV does not support modified times. However when used with
15832 Owncloud or Nextcloud rclone will support modified times.
15833
15834 Likewise plain WebDAV does not support hashes, however when used with
15835 Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depend‐
15836 ing on the exact version of Owncloud or Nextcloud hashes may appear on
15837 all objects, or only on objects which had a hash uploaded with them.
15838
15839 Standard Options
15840 Here are the standard options specific to webdav (Webdav).
15841
15842 –webdav-url
15843 URL of http host to connect to
15844
15845 · Config: url
15846
15847 · Env Var: RCLONE_WEBDAV_URL
15848
15849 · Type: string
15850
15851 · Default: ""
15852
15853 · Examples:
15854
15855 · “https://example.com”
15856
15857 · Connect to example.com
15858
15859 –webdav-vendor
15860 Name of the Webdav site/service/software you are using
15861
15862 · Config: vendor
15863
15864 · Env Var: RCLONE_WEBDAV_VENDOR
15865
15866 · Type: string
15867
15868 · Default: ""
15869
15870 · Examples:
15871
15872 · “nextcloud”
15873
15874 · Nextcloud
15875
15876 · “owncloud”
15877
15878 · Owncloud
15879
15880 · “sharepoint”
15881
15882 · Sharepoint
15883
15884 · “other”
15885
15886 · Other site/service or software
15887
15888 –webdav-user
15889 User name
15890
15891 · Config: user
15892
15893 · Env Var: RCLONE_WEBDAV_USER
15894
15895 · Type: string
15896
15897 · Default: ""
15898
15899 –webdav-pass
15900 Password.
15901
15902 · Config: pass
15903
15904 · Env Var: RCLONE_WEBDAV_PASS
15905
15906 · Type: string
15907
15908 · Default: ""
15909
15910 –webdav-bearer-token
15911 Bearer token instead of user/pass (eg a Macaroon)
15912
15913 · Config: bearer_token
15914
15915 · Env Var: RCLONE_WEBDAV_BEARER_TOKEN
15916
15917 · Type: string
15918
15919 · Default: ""
15920
15921 Provider notes
15922 See below for notes on specific providers.
15923
15924 Owncloud
15925 Click on the settings cog in the bottom right of the page and this will
15926 show the WebDAV URL that rclone needs in the config step. It will look
15927 something like https://example.com/remote.php/webdav/.
15928
15929 Owncloud supports modified times using the X-OC-Mtime header.
15930
15931 Nextcloud
15932 This is configured in an identical way to Owncloud. Note that
15933 Nextcloud does not support streaming of files (rcat) whereas Owncloud
15934 does. This may be fixed (https://github.com/nextcloud/nextcloud-
15935 snap/issues/365) in the future.
15936
15937 Put.io
15938 put.io can be accessed in a read only way using webdav.
15939
15940 Configure the url as https://webdav.put.io and use your normal account
15941 username and password for user and pass. Set the vendor to other.
15942
15943 Your config file should end up looking like this:
15944
15945 [putio]
15946 type = webdav
15947 url = https://webdav.put.io
15948 vendor = other
15949 user = YourUserName
15950 pass = encryptedpassword
15951
15952 If you are using put.io with rclone mount then use the --read-only flag
15953 to signal to the OS that it can't write to the mount.
15954
15955 For more help see the put.io webdav docs (http://help.put.io/apps-and-
15956 integrations/ftp-and-webdav).
15957
15958 Sharepoint
15959 Rclone can be used with Sharepoint provided by OneDrive for Business or
15960 Office365 Education Accounts. This feature is only needed for a few of
15961 these Accounts, mostly Office365 Education ones. These accounts are
15962 sometimes not verified by the domain owner github#1975
15963 (https://github.com/ncw/rclone/issues/1975)
15964
15965 This means that these accounts can't be added using the official API
15966 (other Accounts should work with the “onedrive” option). However, it
15967 is possible to access them using webdav.
15968
15969 To use a sharepoint remote with rclone, add it like this: First, you
15970 need to get your remote's URL:
15971
15972 · Go here (https://onedrive.live.com/about/en-us/signin/) to open your
15973 OneDrive or to sign in
15974
15975 · Now take a look at your address bar, the URL should look like this:
15976 https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_lay‐
15977 outs/15/onedrive.aspx
15978
15979 You'll only need this URL upto the email address. After that, you'll
15980 most likely want to add “/Documents”. That subdirectory contains the
15981 actual data stored on your OneDrive.
15982
15983 Add the remote to rclone like this: Configure the url as
15984 https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
15985 and use your normal account email and password for user and pass. If
15986 you have 2FA enabled, you have to generate an app password. Set the
15987 vendor to sharepoint.
15988
15989 Your config file should look like this:
15990
15991 [sharepoint]
15992 type = webdav
15993 url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
15994 vendor = other
15995 user = YourEmailAddress
15996 pass = encryptedpassword
15997
15998 dCache
15999 dCache (https://www.dcache.org/) is a storage system with WebDAV doors
16000 that support, beside basic and x509, authentication with Macaroons
16001 (https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anu‐
16002 pam_macaroons_v02.pdf) (bearer tokens).
16003
16004 Configure as normal using the other type. Don't enter a username or
16005 password, instead enter your Macaroon as the bearer_token.
16006
16007 The config will end up looking something like this.
16008
16009 [dcache]
16010 type = webdav
16011 url = https://dcache...
16012 vendor = other
16013 user =
16014 pass =
16015 bearer_token = your-macaroon
16016
16017 There is a script (https://github.com/sara-nl/GridScripts/blob/mas‐
16018 ter/get-macaroon) that obtains a Macaroon from a dCache WebDAV end‐
16019 point, and creates an rclone config file.
16020
16021 Yandex Disk
16022 Yandex Disk (https://disk.yandex.com) is a cloud storage solution cre‐
16023 ated by Yandex (https://yandex.com).
16024
16025 Yandex paths may be as deep as required, eg remote:directory/subdirec‐
16026 tory.
16027
16028 Here is an example of making a yandex configuration. First run
16029
16030 rclone config
16031
16032 This will guide you through an interactive setup process:
16033
16034 No remotes found - make a new one
16035 n) New remote
16036 s) Set configuration password
16037 n/s> n
16038 name> remote
16039 Type of storage to configure.
16040 Choose a number from below, or type in your own value
16041 1 / Amazon Drive
16042 \ "amazon cloud drive"
16043 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
16044 \ "s3"
16045 3 / Backblaze B2
16046 \ "b2"
16047 4 / Dropbox
16048 \ "dropbox"
16049 5 / Encrypt/Decrypt a remote
16050 \ "crypt"
16051 6 / Google Cloud Storage (this is not Google Drive)
16052 \ "google cloud storage"
16053 7 / Google Drive
16054 \ "drive"
16055 8 / Hubic
16056 \ "hubic"
16057 9 / Local Disk
16058 \ "local"
16059 10 / Microsoft OneDrive
16060 \ "onedrive"
16061 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
16062 \ "swift"
16063 12 / SSH/SFTP Connection
16064 \ "sftp"
16065 13 / Yandex Disk
16066 \ "yandex"
16067 Storage> 13
16068 Yandex Client Id - leave blank normally.
16069 client_id>
16070 Yandex Client Secret - leave blank normally.
16071 client_secret>
16072 Remote config
16073 Use auto config?
16074 * Say Y if not sure
16075 * Say N if you are working on a remote or headless machine
16076 y) Yes
16077 n) No
16078 y/n> y
16079 If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
16080 Log in and authorize rclone for access
16081 Waiting for code...
16082 Got code
16083 --------------------
16084 [remote]
16085 client_id =
16086 client_secret =
16087 token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
16088 --------------------
16089 y) Yes this is OK
16090 e) Edit this remote
16091 d) Delete this remote
16092 y/e/d> y
16093
16094 See the remote setup docs (https://rclone.org/remote_setup/) for how to
16095 set it up on a machine with no Internet browser available.
16096
16097 Note that rclone runs a webserver on your local machine to collect the
16098 token as returned from Yandex Disk. This only runs from the moment it
16099 opens your browser to the moment you get back the verification code.
16100 This is on http://127.0.0.1:53682/ and this it may require you to un‐
16101 block it temporarily if you are running a host firewall.
16102
16103 Once configured you can then use rclone like this,
16104
16105 See top level directories
16106
16107 rclone lsd remote:
16108
16109 Make a new directory
16110
16111 rclone mkdir remote:directory
16112
16113 List the contents of a directory
16114
16115 rclone ls remote:directory
16116
16117 Sync /home/local/directory to the remote path, deleting any excess
16118 files in the path.
16119
16120 rclone sync /home/local/directory remote:directory
16121
16122 Modified time
16123 Modified times are supported and are stored accurate to 1 ns in custom
16124 metadata called rclone_modified in RFC3339 with nanoseconds format.
16125
16126 MD5 checksums
16127 MD5 checksums are natively supported by Yandex Disk.
16128
16129 Emptying Trash
16130 If you wish to empty your trash you can use the rclone cleanup remote:
16131 command which will permanently delete all your trashed files. This
16132 command does not take any path arguments.
16133
16134 Quota information
16135 To view your current quota you can use the rclone about remote: command
16136 which will display your usage limit (quota) and the current usage.
16137
16138 Limitations
16139 When uploading very large files (bigger than about 5GB) you will need
16140 to increase the --timeout parameter. This is because Yandex pauses
16141 (perhaps to calculate the MD5SUM for the entire file) before returning
16142 confirmation that the file has been uploaded. The default handling of
16143 timeouts in rclone is to assume a 5 minute pause is an error and close
16144 the connection - you'll see net/http: timeout awaiting response headers
16145 errors in the logs if this is happening. Setting the timeout to twice
16146 the max size of file in GB should be enough, so if you want to upload a
16147 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
16148
16149 Standard Options
16150 Here are the standard options specific to yandex (Yandex Disk).
16151
16152 –yandex-client-id
16153 Yandex Client Id Leave blank normally.
16154
16155 · Config: client_id
16156
16157 · Env Var: RCLONE_YANDEX_CLIENT_ID
16158
16159 · Type: string
16160
16161 · Default: ""
16162
16163 –yandex-client-secret
16164 Yandex Client Secret Leave blank normally.
16165
16166 · Config: client_secret
16167
16168 · Env Var: RCLONE_YANDEX_CLIENT_SECRET
16169
16170 · Type: string
16171
16172 · Default: ""
16173
16174 Advanced Options
16175 Here are the advanced options specific to yandex (Yandex Disk).
16176
16177 –yandex-unlink
16178 Remove existing public link to file/folder with link command rather
16179 than creating. Default is false, meaning link command will create or
16180 retrieve public link.
16181
16182 · Config: unlink
16183
16184 · Env Var: RCLONE_YANDEX_UNLINK
16185
16186 · Type: bool
16187
16188 · Default: false
16189
16190 Local Filesystem
16191 Local paths are specified as normal filesystem paths, eg /path/to/wher‐
16192 ever, so
16193
16194 rclone sync /home/source /tmp/destination
16195
16196 Will sync /home/source to /tmp/destination
16197
16198 These can be configured into the config file for consistencies sake,
16199 but it is probably easier not to.
16200
16201 Modified time
16202 Rclone reads and writes the modified time using an accuracy determined
16203 by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Sec‐
16204 ond on OS X.
16205
16206 Filenames
16207 Filenames are expected to be encoded in UTF-8 on disk. This is the
16208 normal case for Windows and OS X.
16209
16210 There is a bit more uncertainty in the Linux world, but new distribu‐
16211 tions will have UTF-8 encoded files names. If you are using an old
16212 Linux filesystem with non UTF-8 file names (eg latin1) then you can use
16213 the convmv tool to convert the filesystem to UTF-8. This tool is
16214 available in most distributions' package managers.
16215
16216 If an invalid (non-UTF8) filename is read, the invalid characters will
16217 be replaced with the unicode replacement character, `�'. rclone will
16218 emit a debug message in this case (use -v to see), eg
16219
16220 Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
16221
16222 Long paths on Windows
16223 Rclone handles long paths automatically, by converting all paths to
16224 long UNC paths (https://msdn.microsoft.com/en-us/library/windows/desk‐
16225 top/aa365247(v=vs.85).aspx#maxpath) which allows paths up to 32,767
16226 characters.
16227
16228 This is why you will see that your paths, for instance c:\files is con‐
16229 verted to the UNC path \\?\c:\files in the output, and \\server\share
16230 is converted to \\?\UNC\server\share.
16231
16232 However, in rare cases this may cause problems with buggy file system
16233 drivers like EncFS (https://github.com/ncw/rclone/issues/261). To dis‐
16234 able UNC conversion globally, add this to your .rclone.conf file:
16235
16236 [local]
16237 nounc = true
16238
16239 If you want to selectively disable UNC, you can add it to a separate
16240 entry like this:
16241
16242 [nounc]
16243 type = local
16244 nounc = true
16245
16246 And use rclone like this:
16247
16248 rclone copy c:\src nounc:z:\dst
16249
16250 This will use UNC paths on c:\src but not on z:\dst. Of course this
16251 will cause problems if the absolute path length of a file exceeds 258
16252 characters on z, so only use this option if you have to.
16253
16254 Symlinks / Junction points
16255 Normally rclone will ignore symlinks or junction points (which behave
16256 like symlinks under Windows).
16257
16258 If you supply --copy-links or -L then rclone will follow the symlink
16259 and copy the pointed to file or directory. Note that this flag is in‐
16260 compatible with -links / -l.
16261
16262 This flag applies to all commands.
16263
16264 For example, supposing you have a directory structure like this
16265
16266 $ tree /tmp/a
16267 /tmp/a
16268 ├── b -> ../b
16269 ├── expected -> ../expected
16270 ├── one
16271 └── two
16272 └── three
16273
16274 Then you can see the difference with and without the flag like this
16275
16276 $ rclone ls /tmp/a
16277 6 one
16278 6 two/three
16279
16280 and
16281
16282 $ rclone -L ls /tmp/a
16283 4174 expected
16284 6 one
16285 6 two/three
16286 6 b/two
16287 6 b/one
16288
16289 –links, -l
16290 Normally rclone will ignore symlinks or junction points (which behave
16291 like symlinks under Windows).
16292
16293 If you supply this flag then rclone will copy symbolic links from the
16294 local storage, and store them as text files, with a `.rclonelink' suf‐
16295 fix in the remote storage.
16296
16297 The text file will contain the target of the symbolic link (see exam‐
16298 ple).
16299
16300 This flag applies to all commands.
16301
16302 For example, supposing you have a directory structure like this
16303
16304 $ tree /tmp/a
16305 /tmp/a
16306 ├── file1 -> ./file4
16307 └── file2 -> /home/user/file3
16308
16309 Copying the entire directory with `-l'
16310
16311 $ rclone copyto -l /tmp/a/file1 remote:/tmp/a/
16312
16313 The remote files are created with a `.rclonelink' suffix
16314
16315 $ rclone ls remote:/tmp/a
16316 5 file1.rclonelink
16317 14 file2.rclonelink
16318
16319 The remote files will contain the target of the symbolic links
16320
16321 $ rclone cat remote:/tmp/a/file1.rclonelink
16322 ./file4
16323
16324 $ rclone cat remote:/tmp/a/file2.rclonelink
16325 /home/user/file3
16326
16327 Copying them back with `-l'
16328
16329 $ rclone copyto -l remote:/tmp/a/ /tmp/b/
16330
16331 $ tree /tmp/b
16332 /tmp/b
16333 ├── file1 -> ./file4
16334 └── file2 -> /home/user/file3
16335
16336 However, if copied back without `-l'
16337
16338 $ rclone copyto remote:/tmp/a/ /tmp/b/
16339
16340 $ tree /tmp/b
16341 /tmp/b
16342 ├── file1.rclonelink
16343 └── file2.rclonelink
16344
16345 Note that this flag is incompatible with -copy-links / -L.
16346
16347 Restricting filesystems with –one-file-system
16348 Normally rclone will recurse through filesystems as mounted.
16349
16350 However if you set --one-file-system or -x this tells rclone to stay in
16351 the filesystem specified by the root and not to recurse into different
16352 file systems.
16353
16354 For example if you have a directory hierarchy like this
16355
16356 root
16357 ├── disk1 - disk1 mounted on the root
16358 │ └── file3 - stored on disk1
16359 ├── disk2 - disk2 mounted on the root
16360 │ └── file4 - stored on disk12
16361 ├── file1 - stored on the root disk
16362 └── file2 - stored on the root disk
16363
16364 Using rclone --one-file-system copy root remote: will only copy file1
16365 and file2. Eg
16366
16367 $ rclone -q --one-file-system ls root
16368 0 file1
16369 0 file2
16370
16371 $ rclone -q ls root
16372 0 disk1/file3
16373 0 disk2/file4
16374 0 file1
16375 0 file2
16376
16377 NB Rclone (like most unix tools such as du, rsync and tar) treats a
16378 bind mount to the same device as being on the same filesystem.
16379
16380 NB This flag is only available on Unix based systems. On systems where
16381 it isn't supported (eg Windows) it will be ignored.
16382
16383 Standard Options
16384 Here are the standard options specific to local (Local Disk).
16385
16386 –local-nounc
16387 Disable UNC (long path names) conversion on Windows
16388
16389 · Config: nounc
16390
16391 · Env Var: RCLONE_LOCAL_NOUNC
16392
16393 · Type: string
16394
16395 · Default: ""
16396
16397 · Examples:
16398
16399 · “true”
16400
16401 · Disables long file names
16402
16403 Advanced Options
16404 Here are the advanced options specific to local (Local Disk).
16405
16406 –copy-links
16407 Follow symlinks and copy the pointed to item.
16408
16409 · Config: copy_links
16410
16411 · Env Var: RCLONE_LOCAL_COPY_LINKS
16412
16413 · Type: bool
16414
16415 · Default: false
16416
16417 –links
16418 Translate symlinks to/from regular files with a `.rclonelink' extension
16419
16420 · Config: links
16421
16422 · Env Var: RCLONE_LOCAL_LINKS
16423
16424 · Type: bool
16425
16426 · Default: false
16427
16428 –skip-links
16429 Don't warn about skipped symlinks. This flag disables warning messages
16430 on skipped symlinks or junction points, as you explicitly acknowledge
16431 that they should be skipped.
16432
16433 · Config: skip_links
16434
16435 · Env Var: RCLONE_LOCAL_SKIP_LINKS
16436
16437 · Type: bool
16438
16439 · Default: false
16440
16441 –local-no-unicode-normalization
16442 Don't apply unicode normalization to paths and filenames (Deprecated)
16443
16444 This flag is deprecated now. Rclone no longer normalizes unicode file
16445 names, but it compares them with unicode normalization in the sync rou‐
16446 tine instead.
16447
16448 · Config: no_unicode_normalization
16449
16450 · Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
16451
16452 · Type: bool
16453
16454 · Default: false
16455
16456 –local-no-check-updated
16457 Don't check to see if the files change during upload
16458
16459 Normally rclone checks the size and modification time of files as they
16460 are being uploaded and aborts with a message which starts “can't copy -
16461 source file is being updated” if the file changes during upload.
16462
16463 However on some file systems this modification time check may fail (eg
16464 Glusterfs #2206 (https://github.com/ncw/rclone/issues/2206)) so this
16465 check can be disabled with this flag.
16466
16467 · Config: no_check_updated
16468
16469 · Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
16470
16471 · Type: bool
16472
16473 · Default: false
16474
16475 –one-file-system
16476 Don't cross filesystem boundaries (unix/macOS only).
16477
16478 · Config: one_file_system
16479
16480 · Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
16481
16482 · Type: bool
16483
16484 · Default: false
16485
16487 v1.47.0 - 2019-04-13
16488 · New backends
16489
16490 · Backend for Koofr cloud storage service. (jaKa)
16491
16492 · New Features
16493
16494 · Resume downloads if the reader fails in copy (Nick Craig-Wood)
16495
16496 · this means rclone will restart transfers if the source has an er‐
16497 ror
16498
16499 · this is most useful for downloads or cloud to cloud copies
16500
16501 · Use --fast-list for listing operations where it won't use more mem‐
16502 ory (Nick Craig-Wood)
16503
16504 · this should speed up the following operations on remotes which
16505 support ListR
16506
16507 · dedupe, serve restic lsf, ls, lsl, lsjson, lsd, md5sum, sha1sum,
16508 hashsum, size, delete, cat, settier
16509
16510 · use --disable ListR to get old behaviour if required
16511
16512 · Make --files-from traverse the destination unless --no-traverse is
16513 set (Nick Craig-Wood)
16514
16515 · this fixes --files-from with Google drive and excessive API use
16516 in general.
16517
16518 · Make server side copy account bytes and obey --max-transfer (Nick
16519 Craig-Wood)
16520
16521 · Add --create-empty-src-dirs flag and default to not creating empty
16522 dirs (ishuah)
16523
16524 · Add client side TLS/SSL flags --ca-cert/--client-cert/--client-key
16525 (Nick Craig-Wood)
16526
16527 · Implement --suffix-keep-extension for use with --suffix (Nick
16528 Craig-Wood)
16529
16530 · build:
16531
16532 · Switch to semvar compliant version tags to be go modules compli‐
16533 ant (Nick Craig-Wood)
16534
16535 · Update to use go1.12.x for the build (Nick Craig-Wood)
16536
16537 · serve dlna: Add connection manager service description to improve
16538 compatibility (Dan Walters)
16539
16540 · lsf: Add `e' format to show encrypted names and `o' for original
16541 IDs (Nick Craig-Wood)
16542
16543 · lsjson: Added --files-only and --dirs-only flags (calistri)
16544
16545 · rc: Implement operations/publiclink the equivalent of rclone link
16546 (Nick Craig-Wood)
16547
16548 · Bug Fixes
16549
16550 · accounting: Fix total ETA when --stats-unit bits is in effect (Nick
16551 Craig-Wood)
16552
16553 · Bash TAB completion
16554
16555 · Use private custom func to fix clash between rclone and kubectl
16556 (Nick Craig-Wood)
16557
16558 · Fix for remotes with underscores in their names (Six)
16559
16560 · Fix completion of remotes (Florian Gamböck)
16561
16562 · Fix autocompletion of remote paths with spaces (Danil Semelenov)
16563
16564 · serve dlna: Fix root XML service descriptor (Dan Walters)
16565
16566 · ncdu: Fix display corruption with Chinese characters (Nick
16567 Craig-Wood)
16568
16569 · Add SIGTERM to signals which run the exit handlers on unix (Nick
16570 Craig-Wood)
16571
16572 · rc: Reload filter when the options are set via the rc (Nick
16573 Craig-Wood)
16574
16575 · VFS / Mount
16576
16577 · Fix FreeBSD: Ignore Truncate if called with no readers and already
16578 the correct size (Nick Craig-Wood)
16579
16580 · Read directory and check for a file before mkdir (Nick Craig-Wood)
16581
16582 · Shorten the locking window for vfs/refresh (Nick Craig-Wood)
16583
16584 · Azure Blob
16585
16586 · Enable MD5 checksums when uploading files bigger than the “Cutoff”
16587 (Dr.Rx)
16588
16589 · Fix SAS URL support (Nick Craig-Wood)
16590
16591 · B2
16592
16593 · Allow manual configuration of backblaze downloadUrl (Vince)
16594
16595 · Ignore already_hidden error on remove (Nick Craig-Wood)
16596
16597 · Ignore malformed src_last_modified_millis (Nick Craig-Wood)
16598
16599 · Drive
16600
16601 · Add --skip-checksum-gphotos to ignore incorrect checksums on Google
16602 Photos (Nick Craig-Wood)
16603
16604 · Allow server side move/copy between different remotes. (Fionera)
16605
16606 · Add docs on team drives and --fast-list eventual consistency (Nes‐
16607 tar47)
16608
16609 · Fix imports of text files (Nick Craig-Wood)
16610
16611 · Fix range requests on 0 length files (Nick Craig-Wood)
16612
16613 · Fix creation of duplicates with server side copy (Nick Craig-Wood)
16614
16615 · Dropbox
16616
16617 · Retry blank errors to fix long listings (Nick Craig-Wood)
16618
16619 · FTP
16620
16621 · Add --ftp-concurrency to limit maximum number of connections (Nick
16622 Craig-Wood)
16623
16624 · Google Cloud Storage
16625
16626 · Fall back to default application credentials (marcintustin)
16627
16628 · Allow bucket policy only buckets (Nick Craig-Wood)
16629
16630 · HTTP
16631
16632 · Add --http-no-slash for websites with directories with no slashes
16633 (Nick Craig-Wood)
16634
16635 · Remove duplicates from listings (Nick Craig-Wood)
16636
16637 · Fix socket leak on 404 errors (Nick Craig-Wood)
16638
16639 · Jottacloud
16640
16641 · Fix token refresh (Sebastian Bünger)
16642
16643 · Add device registration (Oliver Heyme)
16644
16645 · Onedrive
16646
16647 · Implement graceful cancel of multipart uploads if rclone is inter‐
16648 rupted (Cnly)
16649
16650 · Always add trailing colon to path when addressing items, (Cnly)
16651
16652 · Return errors instead of panic for invalid uploads (Fabian Möller)
16653
16654 · S3
16655
16656 · Add support for “Glacier Deep Archive” storage class (Manu)
16657
16658 · Update Dreamhost endpoint (Nick Craig-Wood)
16659
16660 · Note incompatibility with CEPH Jewel (Nick Craig-Wood)
16661
16662 · SFTP
16663
16664 · Allow custom ssh client config (Alexandru Bumbacea)
16665
16666 · Swift
16667
16668 · Obey Retry-After to enable OVH restore from cold storage (Nick
16669 Craig-Wood)
16670
16671 · Work around token expiry on CEPH (Nick Craig-Wood)
16672
16673 · WebDAV
16674
16675 · Allow IsCollection property to be integer or boolean (Nick
16676 Craig-Wood)
16677
16678 · Fix race when creating directories (Nick Craig-Wood)
16679
16680 · Fix About/df when reading the available/total returns 0 (Nick
16681 Craig-Wood)
16682
16683 v1.46 - 2019-02-09
16684 · New backends
16685
16686 · Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick
16687 Craig-Wood)
16688
16689 · New commands
16690
16691 · serve dlna: serves a remove via DLNA for the local network (ni‐
16692 colov)
16693
16694 · New Features
16695
16696 · copy, move: Restore deprecated --no-traverse flag (Nick Craig-Wood)
16697
16698 · This is useful for when transferring a small number of files into
16699 a large destination
16700
16701 · genautocomplete: Add remote path completion for bash completion
16702 (Christopher Peterson & Danil Semelenov)
16703
16704 · Buffer memory handling reworked to return memory to the OS better
16705 (Nick Craig-Wood)
16706
16707 · Buffer recycling library to replace sync.Pool
16708
16709 · Optionally use memory mapped memory for better memory shrinking
16710
16711 · Enable with --use-mmap if having memory problems - not default
16712 yet
16713
16714 · Parallelise reading of files specified by --files-from (Nick
16715 Craig-Wood)
16716
16717 · check: Add stats showing total files matched. (Dario Guzik)
16718
16719 · Allow rename/delete open files under Windows (Nick Craig-Wood)
16720
16721 · lsjson: Use exactly the correct number of decimal places in the
16722 seconds (Nick Craig-Wood)
16723
16724 · Add cookie support with cmdline switch --use-cookies for all HTTP
16725 based remotes (qip)
16726
16727 · Warn if --checksum is set but there are no hashes available (Nick
16728 Craig-Wood)
16729
16730 · Rework rate limiting (pacer) to be more accurate and allow bursting
16731 (Nick Craig-Wood)
16732
16733 · Improve error reporting for too many/few arguments in commands
16734 (Nick Craig-Wood)
16735
16736 · listremotes: Remove -l short flag as it conflicts with the new
16737 global flag (weetmuts)
16738
16739 · Make http serving with auth generate INFO messages on auth fail
16740 (Nick Craig-Wood)
16741
16742 · Bug Fixes
16743
16744 · Fix layout of stats (Nick Craig-Wood)
16745
16746 · Fix --progress crash under Windows Jenkins (Nick Craig-Wood)
16747
16748 · Fix transfer of google/onedrive docs by calling Rcat in Copy when
16749 size is -1 (Cnly)
16750
16751 · copyurl: Fix checking of --dry-run (Denis Skovpen)
16752
16753 · Mount
16754
16755 · Check that mountpoint and local directory to mount don't overlap
16756 (Nick Craig-Wood)
16757
16758 · Fix mount size under 32 bit Windows (Nick Craig-Wood)
16759
16760 · VFS
16761
16762 · Implement renaming of directories for backends without DirMove
16763 (Nick Craig-Wood)
16764
16765 · now all backends except b2 support renaming directories
16766
16767 · Implement --vfs-cache-max-size to limit the total size of the cache
16768 (Nick Craig-Wood)
16769
16770 · Add --dir-perms and --file-perms flags to set default permissions
16771 (Nick Craig-Wood)
16772
16773 · Fix deadlock on concurrent operations on a directory (Nick
16774 Craig-Wood)
16775
16776 · Fix deadlock between RWFileHandle.close and File.Remove (Nick
16777 Craig-Wood)
16778
16779 · Fix renaming/deleting open files with cache mode “writes” under
16780 Windows (Nick Craig-Wood)
16781
16782 · Fix panic on rename with --dry-run set (Nick Craig-Wood)
16783
16784 · Fix vfs/refresh with recurse=true needing the --fast-list flag
16785
16786 · Local
16787
16788 · Add support for -l/--links (symbolic link translation) (yair@uni‐
16789 corn)
16790
16791 · this works by showing links as link.rclonelink - see local back‐
16792 end docs for more info
16793
16794 · this errors if used with -L/--copy-links
16795
16796 · Fix renaming/deleting open files on Windows (Nick Craig-Wood)
16797
16798 · Crypt
16799
16800 · Check for maximum length before decrypting filename to fix panic
16801 (Garry McNulty)
16802
16803 · Azure Blob
16804
16805 · Allow building azureblob backend on *BSD (themylogin)
16806
16807 · Use the rclone HTTP client to support --dump headers, --tpslimit
16808 etc (Nick Craig-Wood)
16809
16810 · Use the s3 pacer for 0 delay in non error conditions (Nick
16811 Craig-Wood)
16812
16813 · Ignore directory markers (Nick Craig-Wood)
16814
16815 · Stop Mkdir attempting to create existing containers (Nick
16816 Craig-Wood)
16817
16818 · B2
16819
16820 · cleanup: will remove unfinished large files >24hrs old (Garry Mc‐
16821 Nulty)
16822
16823 · For a bucket limited application key check the bucket name (Nick
16824 Craig-Wood)
16825
16826 · before this, rclone would use the authorised bucket regardless of
16827 what you put on the command line
16828
16829 · Added --b2-disable-checksum flag (Wojciech Smigielski)
16830
16831 · this enables large files to be uploaded without a SHA-1 hash for
16832 speed reasons
16833
16834 · Drive
16835
16836 · Set default pacer to 100ms for 10 tps (Nick Craig-Wood)
16837
16838 · This fits the Google defaults much better and reduces the 403 er‐
16839 rors massively
16840
16841 · Add --drive-pacer-min-sleep and --drive-pacer-burst to control
16842 the pacer
16843
16844 · Improve ChangeNotify support for items with multiple parents (Fabi‐
16845 an Möller)
16846
16847 · Fix ListR for items with multiple parents - this fixes oddities
16848 with vfs/refresh (Fabian Möller)
16849
16850 · Fix using --drive-impersonate and appfolders (Nick Craig-Wood)
16851
16852 · Fix google docs in rclone mount for some (not all) applications
16853 (Nick Craig-Wood)
16854
16855 · Dropbox
16856
16857 · Retry-After support for Dropbox backend (Mathieu Carbou)
16858
16859 · FTP
16860
16861 · Wait for 60 seconds for a connection to Close then declare it dead
16862 (Nick Craig-Wood)
16863
16864 · helps with indefinite hangs on some FTP servers
16865
16866 · Google Cloud Storage
16867
16868 · Update google cloud storage endpoints (weetmuts)
16869
16870 · HTTP
16871
16872 · Add an example with username and password which is supported but
16873 wasn't documented (Nick Craig-Wood)
16874
16875 · Fix backend with --files-from and non-existent files (Nick
16876 Craig-Wood)
16877
16878 · Hubic
16879
16880 · Make error message more informative if authentication fails (Nick
16881 Craig-Wood)
16882
16883 · Jottacloud
16884
16885 · Resume and deduplication support (Oliver Heyme)
16886
16887 · Use token auth for all API requests Don't store password anymore
16888 (Sebastian Bünger)
16889
16890 · Add support for 2-factor authentification (Sebastian Bünger)
16891
16892 · Mega
16893
16894 · Implement v2 account login which fixes logins for newer Mega ac‐
16895 counts (Nick Craig-Wood)
16896
16897 · Return error if an unknown length file is attempted to be uploaded
16898 (Nick Craig-Wood)
16899
16900 · Add new error codes for better error reporting (Nick Craig-Wood)
16901
16902 · Onedrive
16903
16904 · Fix broken support for “shared with me” folders (Alex Chen)
16905
16906 · Fix root ID not normalised (Cnly)
16907
16908 · Return err instead of panic on unknown-sized uploads (Cnly)
16909
16910 · Qingstor
16911
16912 · Fix go routine leak on multipart upload errors (Nick Craig-Wood)
16913
16914 · Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood)
16915
16916 · Default --qingstor-upload-concurrency to 1 to work around bug (Nick
16917 Craig-Wood)
16918
16919 · S3
16920
16921 · Implement --s3-upload-cutoff for single part uploads below this
16922 (Nick Craig-Wood)
16923
16924 · Change --s3-upload-concurrency default to 4 to increase perfomance
16925 (Nick Craig-Wood)
16926
16927 · Add --s3-bucket-acl to control bucket ACL (Nick Craig-Wood)
16928
16929 · Auto detect region for buckets on operation failure (Nick
16930 Craig-Wood)
16931
16932 · Add GLACIER storage class (William Cocker)
16933
16934 · Add Scaleway to s3 documentation (Rémy Léone)
16935
16936 · Add AWS endpoint eu-north-1 (weetmuts)
16937
16938 · SFTP
16939
16940 · Add support for PEM encrypted private keys (Fabian Möller)
16941
16942 · Add option to force the usage of an ssh-agent (Fabian Möller)
16943
16944 · Perform environment variable expansion on key-file (Fabian Möller)
16945
16946 · Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood)
16947
16948 · Fix rmdir deleting directory contents on some SFTP servers (Nick
16949 Craig-Wood)
16950
16951 · Fix error on dangling symlinks (Nick Craig-Wood)
16952
16953 · Swift
16954
16955 · Add --swift-no-chunk to disable segmented uploads in rcat/mount
16956 (Nick Craig-Wood)
16957
16958 · Introduce application credential auth support (kayrus)
16959
16960 · Fix memory usage by slimming Object (Nick Craig-Wood)
16961
16962 · Fix extra requests on upload (Nick Craig-Wood)
16963
16964 · Fix reauth on big files (Nick Craig-Wood)
16965
16966 · Union
16967
16968 · Fix poll-interval not working (Nick Craig-Wood)
16969
16970 · WebDAV
16971
16972 · Support About which means rclone mount will show the correct disk
16973 size (Nick Craig-Wood)
16974
16975 · Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick
16976 Craig-Wood)
16977
16978 · Fail soft on time parsing errors (Nick Craig-Wood)
16979
16980 · Fix infinite loop on failed directory creation (Nick Craig-Wood)
16981
16982 · Fix identification of directories for Bitrix Site Manager (Nick
16983 Craig-Wood)
16984
16985 · Fix upload of 0 length files on some servers (Nick Craig-Wood)
16986
16987 · Fix if MKCOL fails with 423 Locked assume the directory exists
16988 (Nick Craig-Wood)
16989
16990 v1.45 - 2018-11-24
16991 · New backends
16992
16993 · The Yandex backend was re-written - see below for details (Sebas‐
16994 tian Bünger)
16995
16996 · New commands
16997
16998 · rcd: New command just to serve the remote control API (Nick
16999 Craig-Wood)
17000
17001 · New Features
17002
17003 · The remote control API (rc) was greatly expanded to allow full con‐
17004 trol over rclone (Nick Craig-Wood)
17005
17006 · sensitive operations require authorization or the --rc-no-auth
17007 flag
17008
17009 · config/* operations to configure rclone
17010
17011 · options/* for reading/setting command line flags
17012
17013 · operations/* for all low level operations, eg copy file, list di‐
17014 rectory
17015
17016 · sync/* for sync, copy and move
17017
17018 · --rc-files flag to serve files on the rc http server
17019
17020 · this is for building web native GUIs for rclone
17021
17022 · Optionally serving objects on the rc http server
17023
17024 · Ensure rclone fails to start up if the --rc port is in use al‐
17025 ready
17026
17027 · See the rc docs (https://rclone.org/rc/) for more info
17028
17029 · sync/copy/move
17030
17031 · Make --files-from only read the objects specified and don't scan
17032 directories (Nick Craig-Wood)
17033
17034 · This is a huge speed improvement for destinations with lots of
17035 files
17036
17037 · filter: Add --ignore-case flag (Nick Craig-Wood)
17038
17039 · ncdu: Add remove function (`d' key) (Henning Surmeier)
17040
17041 · rc command
17042
17043 · Add --json flag for structured JSON input (Nick Craig-Wood)
17044
17045 · Add --user and --pass flags and interpret --rc-user, --rc-pass,
17046 --rc-addr (Nick Craig-Wood)
17047
17048 · build
17049
17050 · Require go1.8 or later for compilation (Nick Craig-Wood)
17051
17052 · Enable softfloat on MIPS arch (Scott Edlund)
17053
17054 · Integration test framework revamped with a better report and bet‐
17055 ter retries (Nick Craig-Wood)
17056
17057 · Bug Fixes
17058
17059 · cmd: Make –progress update the stats correctly at the end (Nick
17060 Craig-Wood)
17061
17062 · config: Create config directory on save if it is missing (Nick
17063 Craig-Wood)
17064
17065 · dedupe: Check for existing filename before renaming a dupe file
17066 (ssaqua)
17067
17068 · move: Don't create directories with –dry-run (Nick Craig-Wood)
17069
17070 · operations: Fix Purge and Rmdirs when dir is not the root (Nick
17071 Craig-Wood)
17072
17073 · serve http/webdav/restic: Ensure rclone exits if the port is in use
17074 (Nick Craig-Wood)
17075
17076 · Mount
17077
17078 · Make --volname work for Windows and macOS (Nick Craig-Wood)
17079
17080 · Azure Blob
17081
17082 · Avoid context deadline exceeded error by setting a large TryTimeout
17083 value (brused27)
17084
17085 · Fix erroneous Rmdir error “directory not empty” (Nick Craig-Wood)
17086
17087 · Wait for up to 60s to create a just deleted container (Nick
17088 Craig-Wood)
17089
17090 · Dropbox
17091
17092 · Add dropbox impersonate support (Jake Coggiano)
17093
17094 · Jottacloud
17095
17096 · Fix bug in --fast-list handing of empty folders (albertony)
17097
17098 · Opendrive
17099
17100 · Fix transfer of files with + and & in (Nick Craig-Wood)
17101
17102 · Fix retries of upload chunks (Nick Craig-Wood)
17103
17104 · S3
17105
17106 · Set ACL for server side copies to that provided by the user (Nick
17107 Craig-Wood)
17108
17109 · Fix role_arn, credential_source, ... (Erik Swanson)
17110
17111 · Add config info for Wasabi's US-West endpoint (Henry Ptasinski)
17112
17113 · SFTP
17114
17115 · Ensure file hash checking is really disabled (Jon Fautley)
17116
17117 · Swift
17118
17119 · Add pacer for retries to make swift more reliable (Nick Craig-Wood)
17120
17121 · WebDAV
17122
17123 · Add Content-Type to PUT requests (Nick Craig-Wood)
17124
17125 · Fix config parsing so --webdav-user and --webdav-pass flags work
17126 (Nick Craig-Wood)
17127
17128 · Add RFC3339 date format (Ralf Hemberger)
17129
17130 · Yandex
17131
17132 · The yandex backend was re-written (Sebastian Bünger)
17133
17134 · This implements low level retries (Sebastian Bünger)
17135
17136 · Copy, Move, DirMove, PublicLink and About optional interfaces
17137 (Sebastian Bünger)
17138
17139 · Improved general error handling (Sebastian Bünger)
17140
17141 · Removed ListR for now due to inconsistent behaviour (Sebastian
17142 Bünger)
17143
17144 v1.44 - 2018-10-15
17145 · New commands
17146
17147 · serve ftp: Add ftp server (Antoine GIRARD)
17148
17149 · settier: perform storage tier changes on supported remotes
17150 (sandeepkru)
17151
17152 · New Features
17153
17154 · Reworked command line help
17155
17156 · Make default help less verbose (Nick Craig-Wood)
17157
17158 · Split flags up into global and backend flags (Nick Craig-Wood)
17159
17160 · Implement specialised help for flags and backends (Nick
17161 Craig-Wood)
17162
17163 · Show URL of backend help page when starting config (Nick
17164 Craig-Wood)
17165
17166 · stats: Long names now split in center (Joanna Marek)
17167
17168 · Add –log-format flag for more control over log output (dcpu)
17169
17170 · rc: Add support for OPTIONS and basic CORS (frenos)
17171
17172 · stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
17173
17174 · Bug Fixes
17175
17176 · Fix -P not ending with a new line (Nick Craig-Wood)
17177
17178 · config: don't create default config dir when user supplies –config
17179 (albertony)
17180
17181 · Don't print non-ASCII characters with –progress on windows (Nick
17182 Craig-Wood)
17183
17184 · Correct logs for excluded items (ssaqua)
17185
17186 · Mount
17187
17188 · Remove EXPERIMENTAL tags (Nick Craig-Wood)
17189
17190 · VFS
17191
17192 · Fix race condition detected by serve ftp tests (Nick Craig-Wood)
17193
17194 · Add vfs/poll-interval rc command (Fabian Möller)
17195
17196 · Enable rename for nearly all remotes using server side Move or Copy
17197 (Nick Craig-Wood)
17198
17199 · Reduce directory cache cleared by poll-interval (Fabian Möller)
17200
17201 · Remove EXPERIMENTAL tags (Nick Craig-Wood)
17202
17203 · Local
17204
17205 · Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
17206
17207 · Preallocate files on Windows to reduce fragmentation (Nick
17208 Craig-Wood)
17209
17210 · Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
17211
17212 · Cache
17213
17214 · Add cache/fetch rc function (Fabian Möller)
17215
17216 · Fix worker scale down (Fabian Möller)
17217
17218 · Improve performance by not sending info requests for cached chunks
17219 (dcpu)
17220
17221 · Fix error return value of cache/fetch rc method (Fabian Möller)
17222
17223 · Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)
17224
17225 · Preserve leading / in wrapped remote path (Fabian Möller)
17226
17227 · Add plex_insecure option to skip certificate validation (Fabian
17228 Möller)
17229
17230 · Remove entries that no longer exist in the source (dcpu)
17231
17232 · Crypt
17233
17234 · Preserve leading / in wrapped remote path (Fabian Möller)
17235
17236 · Alias
17237
17238 · Fix handling of Windows network paths (Nick Craig-Wood)
17239
17240 · Azure Blob
17241
17242 · Add –azureblob-list-chunk parameter (Santiago Rodríguez)
17243
17244 · Implemented settier command support on azureblob remote. (sandeep‐
17245 kru)
17246
17247 · Work around SDK bug which causes errors for chunk-sized files (Nick
17248 Craig-Wood)
17249
17250 · Box
17251
17252 · Implement link sharing. (Sebastian Bünger)
17253
17254 · Drive
17255
17256 · Add –drive-import-formats - google docs can now be imported (Fabian
17257 Möller)
17258
17259 · Rewrite mime type and extension handling (Fabian Möller)
17260
17261 · Add document links (Fabian Möller)
17262
17263 · Add support for multipart document extensions (Fabian Möller)
17264
17265 · Add support for apps-script to json export (Fabian Möller)
17266
17267 · Fix escaped chars in documents during list (Fabian Möller)
17268
17269 · Add –drive-v2-download-min-size a workaround for slow downloads
17270 (Fabian Möller)
17271
17272 · Improve directory notifications in ChangeNotify (Fabian Möller)
17273
17274 · When listing team drives in config, continue on failure (Nick
17275 Craig-Wood)
17276
17277 · FTP
17278
17279 · Add a small pause after failed upload before deleting file (Nick
17280 Craig-Wood)
17281
17282 · Google Cloud Storage
17283
17284 · Fix service_account_file being ignored (Fabian Möller)
17285
17286 · Jottacloud
17287
17288 · Minor improvement in quota info (omit if unlimited) (albertony)
17289
17290 · Add –fast-list support (albertony)
17291
17292 · Add permanent delete support: –jottacloud-hard-delete (albertony)
17293
17294 · Add link sharing support (albertony)
17295
17296 · Fix handling of reserved characters. (Sebastian Bünger)
17297
17298 · Fix socket leak on Object.Remove (Nick Craig-Wood)
17299
17300 · Onedrive
17301
17302 · Rework to support Microsoft Graph (Cnly)
17303
17304 · NB this will require re-authenticating the remote
17305
17306 · Removed upload cutoff and always do session uploads (Oliver Heyme)
17307
17308 · Use single-part upload for empty files (Cnly)
17309
17310 · Fix new fields not saved when editing old config (Alex Chen)
17311
17312 · Fix sometimes special chars in filenames not replaced (Alex Chen)
17313
17314 · Ignore OneNote files by default (Alex Chen)
17315
17316 · Add link sharing support (jackyzy823)
17317
17318 · S3
17319
17320 · Use custom pacer, to retry operations when reasonable (Craig
17321 Miskell)
17322
17323 · Use configured server-side-encryption and storace class options
17324 when calling CopyObject() (Paul Kohout)
17325
17326 · Make –s3-v2-auth flag (Nick Craig-Wood)
17327
17328 · Fix v2 auth on files with spaces (Nick Craig-Wood)
17329
17330 · Union
17331
17332 · Implement union backend which reads from multiple backends (Felix
17333 Brucker)
17334
17335 · Implement optional interfaces (Move, DirMove, Copy etc) (Nick
17336 Craig-Wood)
17337
17338 · Fix ChangeNotify to support multiple remotes (Fabian Möller)
17339
17340 · Fix –backup-dir on union backend (Nick Craig-Wood)
17341
17342 · WebDAV
17343
17344 · Add another time format (Nick Craig-Wood)
17345
17346 · Add a small pause after failed upload before deleting file (Nick
17347 Craig-Wood)
17348
17349 · Add workaround for missing mtime (buergi)
17350
17351 · Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
17352
17353 · Yandex
17354
17355 · Remove redundant nil checks (teresy)
17356
17357 v1.43.1 - 2018-09-07
17358 Point release to fix hubic and azureblob backends.
17359
17360 · Bug Fixes
17361
17362 · ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
17363
17364 · cmd: Fix crash with –progress and –stats 0 (Nick Craig-Wood)
17365
17366 · docs: Tidy website display (Anagh Kumar Baranwal)
17367
17368 · Azure Blob:
17369
17370 · Fix multi-part uploads. (sandeepkru)
17371
17372 · Hubic
17373
17374 · Fix uploads (Nick Craig-Wood)
17375
17376 · Retry auth fetching if it fails to make hubic more reliable (Nick
17377 Craig-Wood)
17378
17379 v1.43 - 2018-09-01
17380 · New backends
17381
17382 · Jottacloud (Sebastian Bünger)
17383
17384 · New commands
17385
17386 · copyurl: copies a URL to a remote (Denis)
17387
17388 · New Features
17389
17390 · Reworked config for backends (Nick Craig-Wood)
17391
17392 · All backend config can now be supplied by command line, env var
17393 or config file
17394
17395 · Advanced section in the config wizard for the optional items
17396
17397 · A large step towards rclone backends being usable in other go
17398 software
17399
17400 · Allow on the fly remotes with :backend: syntax
17401
17402 · Stats revamp
17403
17404 · Add --progress/-P flag to show interactive progress (Nick
17405 Craig-Wood)
17406
17407 · Show the total progress of the sync in the stats (Nick
17408 Craig-Wood)
17409
17410 · Add --stats-one-line flag for single line stats (Nick Craig-Wood)
17411
17412 · Added weekday schedule into --bwlimit (Mateusz)
17413
17414 · lsjson: Add option to show the original object IDs (Fabian Möller)
17415
17416 · serve webdav: Make Content-Type without reading the file and add
17417 --etag-hash (Nick Craig-Wood)
17418
17419 · build
17420
17421 · Build macOS with native compiler (Nick Craig-Wood)
17422
17423 · Update to use go1.11 for the build (Nick Craig-Wood)
17424
17425 · rc
17426
17427 · Added core/stats to return the stats (reddi1)
17428
17429 · version --check: Prints the current release and beta versions (Nick
17430 Craig-Wood)
17431
17432 · Bug Fixes
17433
17434 · accounting
17435
17436 · Fix time to completion estimates (Nick Craig-Wood)
17437
17438 · Fix moving average speed for file stats (Nick Craig-Wood)
17439
17440 · config: Fix error reading password from piped input (Nick
17441 Craig-Wood)
17442
17443 · move: Fix --delete-empty-src-dirs flag to delete all empty dirs on
17444 move (ishuah)
17445
17446 · Mount
17447
17448 · Implement --daemon-timeout flag for OSXFUSE (Nick Craig-Wood)
17449
17450 · Fix mount --daemon not working with encrypted config (Alex Chen)
17451
17452 · Clip the number of blocks to 2^32-1 on macOS - fixes borg backup
17453 (Nick Craig-Wood)
17454
17455 · VFS
17456
17457 · Enable vfs-read-chunk-size by default (Fabian Möller)
17458
17459 · Add the vfs/refresh rc command (Fabian Möller)
17460
17461 · Add non recursive mode to vfs/refresh rc command (Fabian Möller)
17462
17463 · Try to seek buffer on read only files (Fabian Möller)
17464
17465 · Local
17466
17467 · Fix crash when deprecated --local-no-unicode-normalization is sup‐
17468 plied (Nick Craig-Wood)
17469
17470 · Fix mkdir error when trying to copy files to the root of a drive on
17471 windows (Nick Craig-Wood)
17472
17473 · Cache
17474
17475 · Fix nil pointer deref when using lsjson on cached directory (Nick
17476 Craig-Wood)
17477
17478 · Fix nil pointer deref for occasional crash on playback (Nick
17479 Craig-Wood)
17480
17481 · Crypt
17482
17483 · Fix accounting when checking hashes on upload (Nick Craig-Wood)
17484
17485 · Amazon Cloud Drive
17486
17487 · Make very clear in the docs that rclone has no ACD keys (Nick
17488 Craig-Wood)
17489
17490 · Azure Blob
17491
17492 · Add connection string and SAS URL auth (Nick Craig-Wood)
17493
17494 · List the container to see if it exists (Nick Craig-Wood)
17495
17496 · Port new Azure Blob Storage SDK (sandeepkru)
17497
17498 · Added blob tier, tier between Hot, Cool and Archive. (sandeepkru)
17499
17500 · Remove leading / from paths (Nick Craig-Wood)
17501
17502 · B2
17503
17504 · Support Application Keys (Nick Craig-Wood)
17505
17506 · Remove leading / from paths (Nick Craig-Wood)
17507
17508 · Box
17509
17510 · Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)
17511
17512 · Make --box-commit-retries flag defaulting to 100 to fix large up‐
17513 loads (Nick Craig-Wood)
17514
17515 · Drive
17516
17517 · Add --drive-keep-revision-forever flag (lewapm)
17518
17519 · Handle gdocs when filtering file names in list (Fabian Möller)
17520
17521 · Support using --fast-list for large speedups (Fabian Möller)
17522
17523 · FTP
17524
17525 · Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)
17526
17527 · Google Cloud Storage
17528
17529 · Fix index out of range error with --fast-list (Nick Craig-Wood)
17530
17531 · Jottacloud
17532
17533 · Fix MD5 error check (Oliver Heyme)
17534
17535 · Handle empty time values (Martin Polden)
17536
17537 · Calculate missing MD5s (Oliver Heyme)
17538
17539 · Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)
17540
17541 · Add optional MimeTyper interface. (Sebastian Bünger)
17542
17543 · Implement optional About interface (for df support). (Sebastian
17544 Bünger)
17545
17546 · Mega
17547
17548 · Wait for events instead of arbitrary sleeping (Nick Craig-Wood)
17549
17550 · Add --mega-hard-delete flag (Nick Craig-Wood)
17551
17552 · Fix failed logins with upper case chars in email (Nick Craig-Wood)
17553
17554 · Onedrive
17555
17556 · Shared folder support (Yoni Jah)
17557
17558 · Implement DirMove (Cnly)
17559
17560 · Fix rmdir sometimes deleting directories with contents (Nick
17561 Craig-Wood)
17562
17563 · Pcloud
17564
17565 · Delete half uploaded files on upload error (Nick Craig-Wood)
17566
17567 · Qingstor
17568
17569 · Remove leading / from paths (Nick Craig-Wood)
17570
17571 · S3
17572
17573 · Fix index out of range error with --fast-list (Nick Craig-Wood)
17574
17575 · Add --s3-force-path-style (Nick Craig-Wood)
17576
17577 · Add support for KMS Key ID (bsteiss)
17578
17579 · Remove leading / from paths (Nick Craig-Wood)
17580
17581 · Swift
17582
17583 · Add storage_policy (Ruben Vandamme)
17584
17585 · Make it so just storage_url or auth_token can be overidden (Nick
17586 Craig-Wood)
17587
17588 · Fix server side copy bug for unusal file names (Nick Craig-Wood)
17589
17590 · Remove leading / from paths (Nick Craig-Wood)
17591
17592 · WebDAV
17593
17594 · Ensure we call MKCOL with a URL with a trailing / for QNAP interop
17595 (Nick Craig-Wood)
17596
17597 · If root ends with / then don't check if it is a file (Nick
17598 Craig-Wood)
17599
17600 · Don't accept redirects when reading metadata (Nick Craig-Wood)
17601
17602 · Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)
17603
17604 · Document dCache and Macaroons (Onno Zweers)
17605
17606 · Sharepoint recursion with different depth (Henning)
17607
17608 · Attempt to remove failed uploads (Nick Craig-Wood)
17609
17610 · Yandex
17611
17612 · Fix listing/deleting files in the root (Nick Craig-Wood)
17613
17614 v1.42 - 2018-06-16
17615 · New backends
17616
17617 · OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)
17618
17619 · New commands
17620
17621 · deletefile command (Filip Bartodziej)
17622
17623 · New Features
17624
17625 · copy, move: Copy single files directly, don't use --files-from
17626 work-around
17627
17628 · this makes them much more efficient
17629
17630 · Implement --max-transfer flag to quit transferring at a limit
17631
17632 · make exit code 8 for --max-transfer exceeded
17633
17634 · copy: copy empty source directories to destination (Ishuah Kariuki)
17635
17636 · check: Add --one-way flag (Kasper Byrdal Nielsen)
17637
17638 · Add siginfo handler for macOS for ctrl-T stats (kubatasiemski)
17639
17640 · rc
17641
17642 · add core/gc to run a garbage collection on demand
17643
17644 · enable go profiling by default on the --rc port
17645
17646 · return error from remote on failure
17647
17648 · lsf
17649
17650 · Add --absolute flag to add a leading / onto path names
17651
17652 · Add --csv flag for compliant CSV output
17653
17654 · Add `m' format specifier to show the MimeType
17655
17656 · Implement `i' format for showing object ID
17657
17658 · lsjson
17659
17660 · Add MimeType to the output
17661
17662 · Add ID field to output to show Object ID
17663
17664 · Add --retries-sleep flag (Benjamin Joseph Dag)
17665
17666 · Oauth tidy up web page and error handling (Henning Surmeier)
17667
17668 · Bug Fixes
17669
17670 · Password prompt output with --log-file fixed for unix (Filip Bar‐
17671 todziej)
17672
17673 · Calculate ModifyWindow each time on the fly to fix various problems
17674 (Stefan Breunig)
17675
17676 · Mount
17677
17678 · Only print “File.rename error” if there actually is an error (Ste‐
17679 fan Breunig)
17680
17681 · Delay rename if file has open writers instead of failing outright
17682 (Stefan Breunig)
17683
17684 · Ensure atexit gets run on interrupt
17685
17686 · macOS enhancements
17687
17688 · Make --noappledouble --noapplexattr
17689
17690 · Add --volname flag and remove special chars from it
17691
17692 · Make Get/List/Set/Remove xattr return ENOSYS for efficiency
17693
17694 · Make --daemon work for macOS without CGO
17695
17696 · VFS
17697
17698 · Add --vfs-read-chunk-size and --vfs-read-chunk-size-limit (Fabian
17699 Möller)
17700
17701 · Fix ChangeNotify for new or changed folders (Fabian Möller)
17702
17703 · Local
17704
17705 · Fix symlink/junction point directory handling under Windows
17706
17707 · NB you will need to add -L to your command line to copy files
17708 with reparse points
17709
17710 · Cache
17711
17712 · Add non cached dirs on notifications (Remus Bunduc)
17713
17714 · Allow root to be expired from rc (Remus Bunduc)
17715
17716 · Clean remaining empty folders from temp upload path (Remus Bunduc)
17717
17718 · Cache lists using batch writes (Remus Bunduc)
17719
17720 · Use secure websockets for HTTPS Plex addresses (John Clayton)
17721
17722 · Reconnect plex websocket on failures (Remus Bunduc)
17723
17724 · Fix panic when running without plex configs (Remus Bunduc)
17725
17726 · Fix root folder caching (Remus Bunduc)
17727
17728 · Crypt
17729
17730 · Check the crypted hash of files when uploading for extra data secu‐
17731 rity
17732
17733 · Dropbox
17734
17735 · Make Dropbox for business folders accessible using an initial / in
17736 the path
17737
17738 · Google Cloud Storage
17739
17740 · Low level retry all operations if necessary
17741
17742 · Google Drive
17743
17744 · Add --drive-acknowledge-abuse to download flagged files
17745
17746 · Add --drive-alternate-export to fix large doc export
17747
17748 · Don't attempt to choose Team Drives when using rclone config create
17749
17750 · Fix change list polling with team drives
17751
17752 · Fix ChangeNotify for folders (Fabian Möller)
17753
17754 · Fix about (and df on a mount) for team drives
17755
17756 · Onedrive
17757
17758 · Errorhandler for onedrive for business requests (Henning Surmeier)
17759
17760 · S3
17761
17762 · Adjust upload concurrency with --s3-upload-concurrency (themylogin)
17763
17764 · Fix --s3-chunk-size which was always using the minimum
17765
17766 · SFTP
17767
17768 · Add --ssh-path-override flag (Piotr Oleszczyk)
17769
17770 · Fix slow downloads for long latency connections
17771
17772 · Webdav
17773
17774 · Add workarounds for biz.mail.ru
17775
17776 · Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)
17777
17778 · Better error message generation
17779
17780 v1.41 - 2018-04-28
17781 · New backends
17782
17783 · Mega support added
17784
17785 · Webdav now supports SharePoint cookie authentication (hensur)
17786
17787 · New commands
17788
17789 · link: create public link to files and folders (Stefan Breunig)
17790
17791 · about: gets quota info from a remote (a-roussos, ncw)
17792
17793 · hashsum: a generic tool for any hash to produce md5sum like output
17794
17795 · New Features
17796
17797 · lsd: Add -R flag and fix and update docs for all ls commands
17798
17799 · ncdu: added a “refresh” key - CTRL-L (Keith Goldfarb)
17800
17801 · serve restic: Add append-only mode (Steve Kriss)
17802
17803 · serve restic: Disallow overwriting files in append-only mode
17804 (Alexander Neumann)
17805
17806 · serve restic: Print actual listener address (Matt Holt)
17807
17808 · size: Add –json flag (Matthew Holt)
17809
17810 · sync: implement –ignore-errors (Mateusz Pabian)
17811
17812 · dedupe: Add dedupe largest functionality (Richard Yang)
17813
17814 · fs: Extend SizeSuffix to include TB and PB for rclone about
17815
17816 · fs: add –dump goroutines and –dump openfiles for debugging
17817
17818 · rc: implement core/memstats to print internal memory usage info
17819
17820 · rc: new call rc/pid (Michael P. Dubner)
17821
17822 · Compile
17823
17824 · Drop support for go1.6
17825
17826 · Release
17827
17828 · Fix make tarball (Chih-Hsuan Yen)
17829
17830 · Bug Fixes
17831
17832 · filter: fix –min-age and –max-age together check
17833
17834 · fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
17835
17836 · lsd,lsf: make sure all times we output are in local time
17837
17838 · rc: fix setting bwlimit to unlimited
17839
17840 · rc: take note of the –rc-addr flag too as per the docs
17841
17842 · Mount
17843
17844 · Use About to return the correct disk total/used/free (eg in df)
17845
17846 · Set --attr-timeout default to 1s - fixes:
17847
17848 · rclone using too much memory
17849
17850 · rclone not serving files to samba
17851
17852 · excessive time listing directories
17853
17854 · Fix df -i (upstream fix)
17855
17856 · VFS
17857
17858 · Filter files . and .. from directory listing
17859
17860 · Only make the VFS cache if –vfs-cache-mode > Off
17861
17862 · Local
17863
17864 · Add –local-no-check-updated to disable updated file checks
17865
17866 · Retry remove on Windows sharing violation error
17867
17868 · Cache
17869
17870 · Flush the memory cache after close
17871
17872 · Purge file data on notification
17873
17874 · Always forget parent dir for notifications
17875
17876 · Integrate with Plex websocket
17877
17878 · Add rc cache/stats (seuffert)
17879
17880 · Add info log on notification
17881
17882 · Box
17883
17884 · Fix failure reading large directories - parse file/directory size
17885 as float
17886
17887 · Dropbox
17888
17889 · Fix crypt+obfuscate on dropbox
17890
17891 · Fix repeatedly uploading the same files
17892
17893 · FTP
17894
17895 · Work around strange response from box FTP server
17896
17897 · More workarounds for FTP servers to fix mkParentDir error
17898
17899 · Fix no error on listing non-existent directory
17900
17901 · Google Cloud Storage
17902
17903 · Add service_account_credentials (Matt Holt)
17904
17905 · Detect bucket presence by listing it - minimises permissions needed
17906
17907 · Ignore zero length directory markers
17908
17909 · Google Drive
17910
17911 · Add service_account_credentials (Matt Holt)
17912
17913 · Fix directory move leaving a hardlinked directory behind
17914
17915 · Return proper google errors when Opening files
17916
17917 · When initialized with a filepath, optional features used incorrect
17918 root path (Stefan Breunig)
17919
17920 · HTTP
17921
17922 · Fix sync for servers which don't return Content-Length in HEAD
17923
17924 · Onedrive
17925
17926 · Add QuickXorHash support for OneDrive for business
17927
17928 · Fix socket leak in multipart session upload
17929
17930 · S3
17931
17932 · Look in S3 named profile files for credentials
17933
17934 · Add --s3-disable-checksum to disable checksum uploading (Chris Re‐
17935 dekop)
17936
17937 · Hierarchical configuration support (Giri Badanahatti)
17938
17939 · Add in config for all the supported S3 providers
17940
17941 · Add One Zone Infrequent Access storage class (Craig Rachel)
17942
17943 · Add –use-server-modtime support (Peter Baumgartner)
17944
17945 · Add –s3-chunk-size option to control multipart uploads
17946
17947 · Ignore zero length directory markers
17948
17949 · SFTP
17950
17951 · Update docs to match code, fix typos and clarify disable_hashcheck
17952 prompt (Michael G. Noll)
17953
17954 · Update docs with Synology quirks
17955
17956 · Fail soft with a debug on hash failure
17957
17958 · Swift
17959
17960 · Add –use-server-modtime support (Peter Baumgartner)
17961
17962 · Webdav
17963
17964 · Support SharePoint cookie authentication (hensur)
17965
17966 · Strip leading and trailing / off root
17967
17968 v1.40 - 2018-03-19
17969 · New backends
17970
17971 · Alias backend to create aliases for existing remote names (Fabian
17972 Möller)
17973
17974 · New commands
17975
17976 · lsf: list for parsing purposes (Jakub Tasiemski)
17977
17978 · by default this is a simple non recursive list of files and di‐
17979 rectories
17980
17981 · it can be configured to add more info in an easy to parse way
17982
17983 · serve restic: for serving a remote as a Restic REST endpoint
17984
17985 · This enables restic to use any backends that rclone can access
17986
17987 · Thanks Alexander Neumann for help, patches and review
17988
17989 · rc: enable the remote control of a running rclone
17990
17991 · The running rclone must be started with –rc and related flags.
17992
17993 · Currently there is support for bwlimit, and flushing for mount
17994 and cache.
17995
17996 · New Features
17997
17998 · --max-delete flag to add a delete threshold (Bjørn Erik Pedersen)
17999
18000 · All backends now support RangeOption for ranged Open
18001
18002 · cat: Use RangeOption for limited fetches to make more efficient
18003
18004 · cryptcheck: make reading of nonce more efficient with RangeOption
18005
18006 · serve http/webdav/restic
18007
18008 · support SSL/TLS
18009
18010 · add --user --pass and --htpasswd for authentication
18011
18012 · copy/move: detect file size change during copy/move and abort
18013 transfer (ishuah)
18014
18015 · cryptdecode: added option to return encrypted file names. (ishuah)
18016
18017 · lsjson: add --encrypted to show encrypted name (Jakub Tasiemski)
18018
18019 · Add --stats-file-name-length to specify the printed file name
18020 length for stats (Will Gunn)
18021
18022 · Compile
18023
18024 · Code base was shuffled and factored
18025
18026 · backends moved into a backend directory
18027
18028 · large packages split up
18029
18030 · See the CONTRIBUTING.md doc for info as to what lives where now
18031
18032 · Update to using go1.10 as the default go version
18033
18034 · Implement daily full integration tests (https://pub.rclone.org/in‐
18035 tegration-tests/)
18036
18037 · Release
18038
18039 · Include a source tarball and sign it and the binaries
18040
18041 · Sign the git tags as part of the release process
18042
18043 · Add .deb and .rpm packages as part of the build
18044
18045 · Make a beta release for all branches on the main repo (but not pull
18046 requests)
18047
18048 · Bug Fixes
18049
18050 · config: fixes errors on non existing config by loading config file
18051 only on first access
18052
18053 · config: retry saving the config after failure (Mateusz)
18054
18055 · sync: when using --backup-dir don't delete files if we can't set
18056 their modtime
18057
18058 · this fixes odd behaviour with Dropbox and --backup-dir
18059
18060 · fshttp: fix idle timeouts for HTTP connections
18061
18062 · serve http: fix serving files with : in - fixes
18063
18064 · Fix --exclude-if-present to ignore directories which it doesn't
18065 have permission for (Iakov Davydov)
18066
18067 · Make accounting work properly with crypt and b2
18068
18069 · remove --no-traverse flag because it is obsolete
18070
18071 · Mount
18072
18073 · Add --attr-timeout flag to control attribute caching in kernel
18074
18075 · this now defaults to 0 which is correct but less efficient
18076
18077 · see the mount docs (/commands/rclone_mount/#attribute-caching)
18078 for more info
18079
18080 · Add --daemon flag to allow mount to run in the background (ishuah)
18081
18082 · Fix: Return ENOSYS rather than EIO on attempted link
18083
18084 · This fixes FileZilla accessing an rclone mount served over sftp.
18085
18086 · Fix setting modtime twice
18087
18088 · Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
18089
18090 · Many bugs fixed in the VFS layer - see below
18091
18092 · VFS
18093
18094 · Many fixes for --vfs-cache-mode writes and above
18095
18096 · Update cached copy if we know it has changed (fixes stale data)
18097
18098 · Clean path names before using them in the cache
18099
18100 · Disable cache cleaner if --vfs-cache-poll-interval=0
18101
18102 · Fill and clean the cache immediately on startup
18103
18104 · Fix Windows opening every file when it stats the file
18105
18106 · Fix applying modtime for an open Write Handle
18107
18108 · Fix creation of files when truncating
18109
18110 · Write 0 bytes when flushing unwritten handles to avoid race condi‐
18111 tions in FUSE
18112
18113 · Downgrade “poll-interval is not supported” message to Info
18114
18115 · Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
18116
18117 · Local
18118
18119 · Downgrade “invalid cross-device link: trying copy” to debug
18120
18121 · Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy
18122 for cross device
18123
18124 · Fix race conditions updating the hashes
18125
18126 · Cache
18127
18128 · Add support for polling - cache will update when remote changes on
18129 supported backends
18130
18131 · Reduce log level for Plex api
18132
18133 · Fix dir cache issue
18134
18135 · Implement --cache-db-wait-time flag
18136
18137 · Improve efficiency with RangeOption and RangeSeek
18138
18139 · Fix dirmove with temp fs enabled
18140
18141 · Notify vfs when using temp fs
18142
18143 · Offline uploading
18144
18145 · Remote control support for path flushing
18146
18147 · Amazon cloud drive
18148
18149 · Rclone no longer has any working keys - disable integration tests
18150
18151 · Implement DirChangeNotify to notify cache/vfs/mount of changes
18152
18153 · Azureblob
18154
18155 · Don't check for bucket/container presense if listing was OK
18156
18157 · this makes rclone do one less request per invocation
18158
18159 · Improve accounting for chunked uploads
18160
18161 · Backblaze B2
18162
18163 · Don't check for bucket/container presense if listing was OK
18164
18165 · this makes rclone do one less request per invocation
18166
18167 · Box
18168
18169 · Improve accounting for chunked uploads
18170
18171 · Dropbox
18172
18173 · Fix custom oauth client parameters
18174
18175 · Google Cloud Storage
18176
18177 · Don't check for bucket/container presense if listing was OK
18178
18179 · this makes rclone do one less request per invocation
18180
18181 · Google Drive
18182
18183 · Migrate to api v3 (Fabian Möller)
18184
18185 · Add scope configuration and root folder selection
18186
18187 · Add --drive-impersonate for service accounts
18188
18189 · thanks to everyone who tested, explored and contributed docs
18190
18191 · Add --drive-use-created-date to use created date as modified date
18192 (nbuchanan)
18193
18194 · Request the export formats only when required
18195
18196 · This makes rclone quicker when there are no google docs
18197
18198 · Fix finding paths with latin1 chars (a workaround for a drive bug)
18199
18200 · Fix copying of a single Google doc file
18201
18202 · Fix --drive-auth-owner-only to look in all directories
18203
18204 · HTTP
18205
18206 · Fix handling of directories with & in
18207
18208 · Onedrive
18209
18210 · Removed upload cutoff and always do session uploads
18211
18212 · this stops the creation of multiple versions on business onedrive
18213
18214 · Overwrite object size value with real size when reading file.
18215 (Victor)
18216
18217 · this fixes oddities when onedrive misreports the size of images
18218
18219 · Pcloud
18220
18221 · Remove unused chunked upload flag and code
18222
18223 · Qingstor
18224
18225 · Don't check for bucket/container presense if listing was OK
18226
18227 · this makes rclone do one less request per invocation
18228
18229 · S3
18230
18231 · Support hashes for multipart files (Chris Redekop)
18232
18233 · Initial support for IBM COS (S3) (Giri Badanahatti)
18234
18235 · Update docs to discourage use of v2 auth with CEPH and others
18236
18237 · Don't check for bucket/container presense if listing was OK
18238
18239 · this makes rclone do one less request per invocation
18240
18241 · Fix server side copy and set modtime on files with + in
18242
18243 · SFTP
18244
18245 · Add option to disable remote hash check command execution (Jon
18246 Fautley)
18247
18248 · Add --sftp-ask-password flag to prompt for password when needed
18249 (Leo R. Lundgren)
18250
18251 · Add set_modtime configuration option
18252
18253 · Fix following of symlinks
18254
18255 · Fix reading config file outside of Fs setup
18256
18257 · Fix reading $USER in username fallback not $HOME
18258
18259 · Fix running under crontab - Use correct OS way of reading username
18260
18261 · Swift
18262
18263 · Fix refresh of authentication token
18264
18265 · in v1.39 a bug was introduced which ignored new tokens - this
18266 fixes it
18267
18268 · Fix extra HEAD transaction when uploading a new file
18269
18270 · Don't check for bucket/container presense if listing was OK
18271
18272 · this makes rclone do one less request per invocation
18273
18274 · Webdav
18275
18276 · Add new time formats to support mydrive.ch and others
18277
18278 v1.39 - 2017-12-23
18279 · New backends
18280
18281 · WebDAV
18282
18283 · tested with nextcloud, owncloud, put.io and others!
18284
18285 · Pcloud
18286
18287 · cache - wraps a cache around other backends (Remus Bunduc)
18288
18289 · useful in combination with mount
18290
18291 · NB this feature is in beta so use with care
18292
18293 · New commands
18294
18295 · serve command with subcommands:
18296
18297 · serve webdav: this implements a webdav server for any rclone re‐
18298 mote.
18299
18300 · serve http: command to serve a remote over HTTP
18301
18302 · config: add sub commands for full config file management
18303
18304 · create/delete/dump/edit/file/password/providers/show/update
18305
18306 · touch: to create or update the timestamp of a file (Jakub Tasiems‐
18307 ki)
18308
18309 · New Features
18310
18311 · curl install for rclone (Filip Bartodziej)
18312
18313 · –stats now shows percentage, size, rate and ETA in condensed form
18314 (Ishuah Kariuki)
18315
18316 · –exclude-if-present to exclude a directory if a file is present
18317 (Iakov Davydov)
18318
18319 · rmdirs: add –leave-root flag (lewpam)
18320
18321 · move: add –delete-empty-src-dirs flag to remove dirs after move
18322 (Ishuah Kariuki)
18323
18324 · Add –dump flag, introduce –dump requests, responses and remove
18325 –dump-auth, –dump-filters
18326
18327 · Obscure X-Auth-Token: from headers when dumping too
18328
18329 · Document and implement exit codes for different failure modes
18330 (Ishuah Kariuki)
18331
18332 · Compile
18333
18334 · Bug Fixes
18335
18336 · Retry lots more different types of errors to make multipart trans‐
18337 fers more reliable
18338
18339 · Save the config before asking for a token, fixes disappearing oauth
18340 config
18341
18342 · Warn the user if –include and –exclude are used together (Ernest
18343 Borowski)
18344
18345 · Fix duplicate files (eg on Google drive) causing spurious copies
18346
18347 · Allow trailing and leading whitespace for passwords (Jason Rose)
18348
18349 · ncdu: fix crashes on empty directories
18350
18351 · rcat: fix goroutine leak
18352
18353 · moveto/copyto: Fix to allow copying to the same name
18354
18355 · Mount
18356
18357 · –vfs-cache mode to make writes into mounts more reliable.
18358
18359 · this requires caching files on the disk (see –cache-dir)
18360
18361 · As this is a new feature, use with care
18362
18363 · Use sdnotify to signal systemd the mount is ready (Fabian Möller)
18364
18365 · Check if directory is not empty before mounting (Ernest Borowski)
18366
18367 · Local
18368
18369 · Add error message for cross file system moves
18370
18371 · Fix equality check for times
18372
18373 · Dropbox
18374
18375 · Rework multipart upload
18376
18377 · buffer the chunks when uploading large files so they can be re‐
18378 tried
18379
18380 · change default chunk size to 48MB now we are buffering them in
18381 memory
18382
18383 · retry every error after the first chunk is done successfully
18384
18385 · Fix error when renaming directories
18386
18387 · Swift
18388
18389 · Fix crash on bad authentication
18390
18391 · Google Drive
18392
18393 · Add service account support (Tim Cooijmans)
18394
18395 · S3
18396
18397 · Make it work properly with Digital Ocean Spaces (Andrew
18398 Starr-Bochicchio)
18399
18400 · Fix crash if a bad listing is received
18401
18402 · Add support for ECS task IAM roles (David Minor)
18403
18404 · Backblaze B2
18405
18406 · Fix multipart upload retries
18407
18408 · Fix –hard-delete to make it work 100% of the time
18409
18410 · Swift
18411
18412 · Allow authentication with storage URL and auth key (Giovanni Pizzi)
18413
18414 · Add new fields for swift configuration to support IBM Bluemix Swift
18415 (Pierre Carlson)
18416
18417 · Add OS_TENANT_ID and OS_USER_ID to config
18418
18419 · Allow configs with user id instead of user name
18420
18421 · Check if swift segments container exists before creating (John
18422 Leach)
18423
18424 · Fix memory leak in swift transfers (upstream fix)
18425
18426 · SFTP
18427
18428 · Add option to enable the use of aes128-cbc cipher (Jon Fautley)
18429
18430 · Amazon cloud drive
18431
18432 · Fix download of large files failing with “Only one auth mechanism
18433 allowed”
18434
18435 · crypt
18436
18437 · Option to encrypt directory names or leave them intact
18438
18439 · Implement DirChangeNotify (Fabian Möller)
18440
18441 · onedrive
18442
18443 · Add option to choose resourceURL during setup of OneDrive Business
18444 account if more than one is available for user
18445
18446 v1.38 - 2017-09-30
18447 · New backends
18448
18449 · Azure Blob Storage (thanks Andrei Dragomir)
18450
18451 · Box
18452
18453 · Onedrive for Business (thanks Oliver Heyme)
18454
18455 · QingStor from QingCloud (thanks wuyu)
18456
18457 · New commands
18458
18459 · rcat - read from standard input and stream upload
18460
18461 · tree - shows a nicely formatted recursive listing
18462
18463 · cryptdecode - decode crypted file names (thanks ishuah)
18464
18465 · config show - print the config file
18466
18467 · config file - print the config file location
18468
18469 · New Features
18470
18471 · Empty directories are deleted on sync
18472
18473 · dedupe - implement merging of duplicate directories
18474
18475 · check and cryptcheck made more consistent and use less memory
18476
18477 · cleanup for remaining remotes (thanks ishuah)
18478
18479 · --immutable for ensuring that files don't change (thanks Jacob Mc‐
18480 Namee)
18481
18482 · --user-agent option (thanks Alex McGrath Kraak)
18483
18484 · --disable flag to disable optional features
18485
18486 · --bind flag for choosing the local addr on outgoing connections
18487
18488 · Support for zsh auto-completion (thanks bpicode)
18489
18490 · Stop normalizing file names but do a normalized compare in sync
18491
18492 · Compile
18493
18494 · Update to using go1.9 as the default go version
18495
18496 · Remove snapd build due to maintenance problems
18497
18498 · Bug Fixes
18499
18500 · Improve retriable error detection which makes multipart uploads
18501 better
18502
18503 · Make check obey --ignore-size
18504
18505 · Fix bwlimit toggle in conjunction with schedules (thanks cbruegg)
18506
18507 · config ensures newly written config is on the same mount
18508
18509 · Local
18510
18511 · Revert to copy when moving file across file system boundaries
18512
18513 · --skip-links to suppress symlink warnings (thanks Zhiming Wang)
18514
18515 · Mount
18516
18517 · Re-use rcat internals to support uploads from all remotes
18518
18519 · Dropbox
18520
18521 · Fix “entry doesn't belong in directory” error
18522
18523 · Stop using deprecated API methods
18524
18525 · Swift
18526
18527 · Fix server side copy to empty container with --fast-list
18528
18529 · Google Drive
18530
18531 · Change the default for --drive-use-trash to true
18532
18533 · S3
18534
18535 · Set session token when using STS (thanks Girish Ramakrishnan)
18536
18537 · Glacier docs and error messages (thanks Jan Varho)
18538
18539 · Read 1000 (not 1024) items in dir listings to fix Wasabi
18540
18541 · Backblaze B2
18542
18543 · Fix SHA1 mismatch when downloading files with no SHA1
18544
18545 · Calculate missing hashes on the fly instead of spooling
18546
18547 · --b2-hard-delete to permanently delete (not hide) files (thanks
18548 John Papandriopoulos)
18549
18550 · Hubic
18551
18552 · Fix creating containers - no longer have to use the default con‐
18553 tainer
18554
18555 · Swift
18556
18557 · Optionally configure from a standard set of OpenStack environment
18558 vars
18559
18560 · Add endpoint_type config
18561
18562 · Google Cloud Storage
18563
18564 · Fix bucket creation to work with limited permission users
18565
18566 · SFTP
18567
18568 · Implement connection pooling for multiple ssh connections
18569
18570 · Limit new connections per second
18571
18572 · Add support for MD5 and SHA1 hashes where available (thanks Chris‐
18573 tian Brüggemann)
18574
18575 · HTTP
18576
18577 · Fix URL encoding issues
18578
18579 · Fix directories with : in
18580
18581 · Fix panic with URL encoded content
18582
18583 v1.37 - 2017-07-22
18584 · New backends
18585
18586 · FTP - thanks to Antonio Messina
18587
18588 · HTTP - thanks to Vasiliy Tolstov
18589
18590 · New commands
18591
18592 · rclone ncdu - for exploring a remote with a text based user inter‐
18593 face.
18594
18595 · rclone lsjson - for listing with a machine readable output
18596
18597 · rclone dbhashsum - to show Dropbox style hashes of files (local or
18598 Dropbox)
18599
18600 · New Features
18601
18602 · Implement –fast-list flag
18603
18604 · This allows remotes to list recursively if they can
18605
18606 · This uses less transactions (important if you pay for them)
18607
18608 · This may or may not be quicker
18609
18610 · This will use more memory as it has to hold the listing in memory
18611
18612 · –old-sync-method deprecated - the remaining uses are covered by
18613 –fast-list
18614
18615 · This involved a major re-write of all the listing code
18616
18617 · Add –tpslimit and –tpslimit-burst to limit transactions per second
18618
18619 · this is useful in conjuction with rclone mount to limit external
18620 apps
18621
18622 · Add –stats-log-level so can see –stats without -v
18623
18624 · Print password prompts to stderr - Hraban Luyat
18625
18626 · Warn about duplicate files when syncing
18627
18628 · Oauth improvements
18629
18630 · allow auth_url and token_url to be set in the config file
18631
18632 · Print redirection URI if using own credentials.
18633
18634 · Don't Mkdir at the start of sync to save transactions
18635
18636 · Compile
18637
18638 · Update build to go1.8.3
18639
18640 · Require go1.6 for building rclone
18641
18642 · Compile 386 builds with “GO386=387” for maximum compatibility
18643
18644 · Bug Fixes
18645
18646 · Fix menu selection when no remotes
18647
18648 · Config saving reworked to not kill the file if disk gets full
18649
18650 · Don't delete remote if name does not change while renaming
18651
18652 · moveto, copyto: report transfers and checks as per move and copy
18653
18654 · Local
18655
18656 · Add –local-no-unicode-normalization flag - Bob Potter
18657
18658 · Mount
18659
18660 · Now supported on Windows using cgofuse and WinFsp - thanks to Bill
18661 Zissimopoulos for much help
18662
18663 · Compare checksums on upload/download via FUSE
18664
18665 · Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme
18666 Vizcaino
18667
18668 · On read only open of file, make open pending until first read
18669
18670 · Make –read-only reject modify operations
18671
18672 · Implement ModTime via FUSE for remotes that support it
18673
18674 · Allow modTime to be changed even before all writers are closed
18675
18676 · Fix panic on renames
18677
18678 · Fix hang on errored upload
18679
18680 · Crypt
18681
18682 · Report the name:root as specified by the user
18683
18684 · Add an “obfuscate” option for filename encryption - Stephen Harris
18685
18686 · Amazon Drive
18687
18688 · Fix initialization order for token renewer
18689
18690 · Remove revoked credentials, allow oauth proxy config and update
18691 docs
18692
18693 · B2
18694
18695 · Reduce minimum chunk size to 5MB
18696
18697 · Drive
18698
18699 · Add team drive support
18700
18701 · Reduce bandwidth by adding fields for partial responses - Martin
18702 Kristensen
18703
18704 · Implement –drive-shared-with-me flag to view shared with me files -
18705 Danny Tsai
18706
18707 · Add –drive-trashed-only to read only the files in the trash
18708
18709 · Remove obsolete –drive-full-list
18710
18711 · Add missing seek to start on retries of chunked uploads
18712
18713 · Fix stats accounting for upload
18714
18715 · Convert / in names to a unicode equivalent (/)
18716
18717 · Poll for Google Drive changes when mounted
18718
18719 · OneDrive
18720
18721 · Fix the uploading of files with spaces
18722
18723 · Fix initialization order for token renewer
18724
18725 · Display speeds accurately when uploading - Yoni Jah
18726
18727 · Swap to using http://localhost:53682/ as redirect URL - Michael
18728 Ledin
18729
18730 · Retry on token expired error, reset upload body on retry - Yoni Jah
18731
18732 · Google Cloud Storage
18733
18734 · Add ability to specify location and storage class via config and
18735 command line - thanks gdm85
18736
18737 · Create container if necessary on server side copy
18738
18739 · Increase directory listing chunk to 1000 to increase performance
18740
18741 · Obtain a refresh token for GCS - Steven Lu
18742
18743 · Yandex
18744
18745 · Fix the name reported in log messages (was empty)
18746
18747 · Correct error return for listing empty directory
18748
18749 · Dropbox
18750
18751 · Rewritten to use the v2 API
18752
18753 · Now supports ModTime
18754
18755 · Can only set by uploading the file again
18756
18757 · If you uploaded with an old rclone, rclone may upload every‐
18758 thing again
18759
18760 · Use --size-only or --checksum to avoid this
18761
18762 · Now supports the Dropbox content hashing scheme
18763
18764 · Now supports low level retries
18765
18766 · S3
18767
18768 · Work around eventual consistency in bucket creation
18769
18770 · Create container if necessary on server side copy
18771
18772 · Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar
18773 Ahmed
18774
18775 · Swift, Hubic
18776
18777 · Fix zero length directory markers showing in the subdirectory list‐
18778 ing
18779
18780 · this caused lots of duplicate transfers
18781
18782 · Fix paged directory listings
18783
18784 · this caused duplicate directory errors
18785
18786 · Create container if necessary on server side copy
18787
18788 · Increase directory listing chunk to 1000 to increase performance
18789
18790 · Make sensible error if the user forgets the container
18791
18792 · SFTP
18793
18794 · Add support for using ssh key files
18795
18796 · Fix under Windows
18797
18798 · Fix ssh agent on Windows
18799
18800 · Adapt to latest version of library - Igor Kharin
18801
18802 v1.36 - 2017-03-18
18803 · New Features
18804
18805 · SFTP remote (Jack Schmidt)
18806
18807 · Re-implement sync routine to work a directory at a time reducing
18808 memory usage
18809
18810 · Logging revamped to be more inline with rsync - now much quieter *
18811 -v only shows transfers * -vv is for full debug * –syslog to log to
18812 syslog on capable platforms
18813
18814 · Implement –backup-dir and –suffix
18815
18816 · Implement –track-renames (initial implementation by Bjørn Erik Ped‐
18817 ersen)
18818
18819 · Add time-based bandwidth limits (Lukas Loesche)
18820
18821 · rclone cryptcheck: checks integrity of crypt remotes
18822
18823 · Allow all config file variables and options to be set from environ‐
18824 ment variables
18825
18826 · Add –buffer-size parameter to control buffer size for copy
18827
18828 · Make –delete-after the default
18829
18830 · Add –ignore-checksum flag (fixed by Hisham Zarka)
18831
18832 · rclone check: Add –download flag to check all the data, not just
18833 hashes
18834
18835 · rclone cat: add –head, –tail, –offset, –count and –discard
18836
18837 · rclone config: when choosing from a list, allow the value to be en‐
18838 tered too
18839
18840 · rclone config: allow rename and copy of remotes
18841
18842 · rclone obscure: for generating encrypted passwords for rclone's
18843 config (T.C. Ferguson)
18844
18845 · Comply with XDG Base Directory specification (Dario Giovannetti)
18846
18847 · this moves the default location of the config file in a backwards
18848 compatible way
18849
18850 · Release changes
18851
18852 · Ubuntu snap support (Dedsec1)
18853
18854 · Compile with go 1.8
18855
18856 · MIPS/Linux big and little endian support
18857
18858 · Bug Fixes
18859
18860 · Fix copyto copying things to the wrong place if the destination dir
18861 didn't exist
18862
18863 · Fix parsing of remotes in moveto and copyto
18864
18865 · Fix –delete-before deleting files on copy
18866
18867 · Fix –files-from with an empty file copying everything
18868
18869 · Fix sync: don't update mod times if –dry-run set
18870
18871 · Fix MimeType propagation
18872
18873 · Fix filters to add ** rules to directory rules
18874
18875 · Local
18876
18877 · Implement -L, –copy-links flag to allow rclone to follow symlinks
18878
18879 · Open files in write only mode so rclone can write to an rclone
18880 mount
18881
18882 · Fix unnormalised unicode causing problems reading directories
18883
18884 · Fix interaction between -x flag and –max-depth
18885
18886 · Mount
18887
18888 · Implement proper directory handling (mkdir, rmdir, renaming)
18889
18890 · Make include and exclude filters apply to mount
18891
18892 · Implement read and write async buffers - control with –buffer-size
18893
18894 · Fix fsync on for directories
18895
18896 · Fix retry on network failure when reading off crypt
18897
18898 · Crypt
18899
18900 · Add –crypt-show-mapping to show encrypted file mapping
18901
18902 · Fix crypt writer getting stuck in a loop
18903
18904 · IMPORTANT this bug had the potential to cause data corruption
18905 when
18906
18907 · reading data from a network based remote and
18908
18909 · writing to a crypt on Google Drive
18910
18911 · Use the cryptcheck command to validate your data if you are con‐
18912 cerned
18913
18914 · If syncing two crypt remotes, sync the unencrypted remote
18915
18916 · Amazon Drive
18917
18918 · Fix panics on Move (rename)
18919
18920 · Fix panic on token expiry
18921
18922 · B2
18923
18924 · Fix inconsistent listings and rclone check
18925
18926 · Fix uploading empty files with go1.8
18927
18928 · Constrain memory usage when doing multipart uploads
18929
18930 · Fix upload url not being refreshed properly
18931
18932 · Drive
18933
18934 · Fix Rmdir on directories with trashed files
18935
18936 · Fix “Ignoring unknown object” when downloading
18937
18938 · Add –drive-list-chunk
18939
18940 · Add –drive-skip-gdocs (Károly Oláh)
18941
18942 · OneDrive
18943
18944 · Implement Move
18945
18946 · Fix Copy
18947
18948 · Fix overwrite detection in Copy
18949
18950 · Fix waitForJob to parse errors correctly
18951
18952 · Use token renewer to stop auth errors on long uploads
18953
18954 · Fix uploading empty files with go1.8
18955
18956 · Google Cloud Storage
18957
18958 · Fix depth 1 directory listings
18959
18960 · Yandex
18961
18962 · Fix single level directory listing
18963
18964 · Dropbox
18965
18966 · Normalise the case for single level directory listings
18967
18968 · Fix depth 1 listing
18969
18970 · S3
18971
18972 · Added ca-central-1 region (Jon Yergatian)
18973
18974 v1.35 - 2017-01-02
18975 · New Features
18976
18977 · moveto and copyto commands for choosing a destination name on
18978 copy/move
18979
18980 · rmdirs command to recursively delete empty directories
18981
18982 · Allow repeated –include/–exclude/–filter options
18983
18984 · Only show transfer stats on commands which transfer stuff
18985
18986 · show stats on any command using the --stats flag
18987
18988 · Allow overlapping directories in move when server side dir move is
18989 supported
18990
18991 · Add –stats-unit option - thanks Scott McGillivray
18992
18993 · Bug Fixes
18994
18995 · Fix the config file being overwritten when two rclones are running
18996
18997 · Make rclone lsd obey the filters properly
18998
18999 · Fix compilation on mips
19000
19001 · Fix not transferring files that don't differ in size
19002
19003 · Fix panic on nil retry/fatal error
19004
19005 · Mount
19006
19007 · Retry reads on error - should help with reliability a lot
19008
19009 · Report the modification times for directories from the remote
19010
19011 · Add bandwidth accounting and limiting (fixes –bwlimit)
19012
19013 · If –stats provided will show stats and which files are transferring
19014
19015 · Support R/W files if truncate is set.
19016
19017 · Implement statfs interface so df works
19018
19019 · Note that write is now supported on Amazon Drive
19020
19021 · Report number of blocks in a file - thanks Stefan Breunig
19022
19023 · Crypt
19024
19025 · Prevent the user pointing crypt at itself
19026
19027 · Fix failed to authenticate decrypted block errors
19028
19029 · these will now return the underlying unexpected EOF instead
19030
19031 · Amazon Drive
19032
19033 · Add support for server side move and directory move - thanks Stefan
19034 Breunig
19035
19036 · Fix nil pointer deref on size attribute
19037
19038 · B2
19039
19040 · Use new prefix and delimiter parameters in directory listings
19041
19042 · This makes –max-depth 1 dir listings as used in mount much faster
19043
19044 · Reauth the account while doing uploads too - should help with token
19045 expiry
19046
19047 · Drive
19048
19049 · Make DirMove more efficient and complain about moving the root
19050
19051 · Create destination directory on Move()
19052
19053 v1.34 - 2016-11-06
19054 · New Features
19055
19056 · Stop single file and --files-from operations iterating through the
19057 source bucket.
19058
19059 · Stop removing failed upload to cloud storage remotes
19060
19061 · Make ContentType be preserved for cloud to cloud copies
19062
19063 · Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco
19064 Paganini
19065
19066 · rclone check shows count of hashes that couldn't be checked
19067
19068 · rclone listremotes command
19069
19070 · Support linux/arm64 build - thanks Fredrik Fornwall
19071
19072 · Remove Authorization: lines from --dump-headers output
19073
19074 · Bug Fixes
19075
19076 · Ignore files with control characters in the names
19077
19078 · Fix rclone move command
19079
19080 · Delete src files which already existed in dst
19081
19082 · Fix deletion of src file when dst file older
19083
19084 · Fix rclone check on crypted file systems
19085
19086 · Make failed uploads not count as “Transferred”
19087
19088 · Make sure high level retries show with -q
19089
19090 · Use a vendor directory with godep for repeatable builds
19091
19092 · rclone mount - FUSE
19093
19094 · Implement FUSE mount options
19095
19096 · --no-modtime, --debug-fuse, --read-only, --allow-non-empty, --al‐
19097 low-root, --allow-other
19098
19099 · --default-permissions, --write-back-cache, --max-read-ahead,
19100 --umask, --uid, --gid
19101
19102 · Add --dir-cache-time to control caching of directory entries
19103
19104 · Implement seek for files opened for read (useful for video players)
19105
19106 · with -no-seek flag to disable
19107
19108 · Fix crash on 32 bit ARM (alignment of 64 bit counter)
19109
19110 · ...and many more internal fixes and improvements!
19111
19112 · Crypt
19113
19114 · Don't show encrypted password in configurator to stop confusion
19115
19116 · Amazon Drive
19117
19118 · New wait for upload option --acd-upload-wait-per-gb
19119
19120 · upload timeouts scale by file size and can be disabled
19121
19122 · Add 502 Bad Gateway to list of errors we retry
19123
19124 · Fix overwriting a file with a zero length file
19125
19126 · Fix ACD file size warning limit - thanks Felix Bünemann
19127
19128 · Local
19129
19130 · Unix: implement -x/--one-file-system to stay on a single file sys‐
19131 tem
19132
19133 · thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
19134
19135 · Windows: ignore the symlink bit on files
19136
19137 · Windows: Ignore directory based junction points
19138
19139 · B2
19140
19141 · Make sure each upload has at least one upload slot - fixes strange
19142 upload stats
19143
19144 · Fix uploads when using crypt
19145
19146 · Fix download of large files (sha1 mismatch)
19147
19148 · Return error when we try to create a bucket which someone else owns
19149
19150 · Update B2 docs with Data usage, and Crypt section - thanks Tomasz
19151 Mazur
19152
19153 · S3
19154
19155 · Command line and config file support for
19156
19157 · Setting/overriding ACL - thanks Radek Senfeld
19158
19159 · Setting storage class - thanks Asko Tamm
19160
19161 · Drive
19162
19163 · Make exponential backoff work exactly as per Google specification
19164
19165 · add .epub, .odp and .tsv as export formats.
19166
19167 · Swift
19168
19169 · Don't read metadata for directory marker objects
19170
19171 v1.33 - 2016-08-24
19172 · New Features
19173
19174 · Implement encryption
19175
19176 · data encrypted in NACL secretbox format
19177
19178 · with optional file name encryption
19179
19180 · New commands
19181
19182 · rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
19183
19184 · works on Linux, FreeBSD and OS X (need testers for the last 2!)
19185
19186 · rclone cat - outputs remote file or files to the terminal
19187
19188 · rclone genautocomplete - command to make a bash completion script
19189 for rclone
19190
19191 · Editing a remote using rclone config now goes through the wizard
19192
19193 · Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386
19194 processors
19195
19196 · Use cobra for sub commands and docs generation
19197
19198 · drive
19199
19200 · Document how to make your own client_id
19201
19202 · s3
19203
19204 · User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
19205
19206 · b2
19207
19208 · Fix stats accounting for upload - no more jumping to 100% done
19209
19210 · On cleanup delete hide marker if it is the current file
19211
19212 · New B2 API endpoint (thanks Per Cederberg)
19213
19214 · Set maximum backoff to 5 Minutes
19215
19216 · onedrive
19217
19218 · Fix URL escaping in file names - eg uploading files with + in them.
19219
19220 · amazon cloud drive
19221
19222 · Fix token expiry during large uploads
19223
19224 · Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
19225
19226 · local
19227
19228 · Fix filenames with invalid UTF-8 not being uploaded
19229
19230 · Fix problem with some UTF-8 characters on OS X
19231
19232 v1.32 - 2016-07-13
19233 · Backblaze B2
19234
19235 · Fix upload of files large files not in root
19236
19237 v1.31 - 2016-07-13
19238 · New Features
19239
19240 · Reduce memory on sync by about 50%
19241
19242 · Implement –no-traverse flag to stop copy traversing the destination
19243 remote.
19244
19245 · This can be used to reduce memory usage down to the smallest pos‐
19246 sible.
19247
19248 · Useful to copy a small number of files into a large destination
19249 folder.
19250
19251 · Implement cleanup command for emptying trash / removing old ver‐
19252 sions of files
19253
19254 · Currently B2 only
19255
19256 · Single file handling improved
19257
19258 · Now copied with –files-from
19259
19260 · Automatically sets –no-traverse when copying a single file
19261
19262 · Info on using installing with ansible - thanks Stefan Weichinger
19263
19264 · Implement –no-update-modtime flag to stop rclone fixing the remote
19265 modified times.
19266
19267 · Bug Fixes
19268
19269 · Fix move command - stop it running for overlapping Fses - this was
19270 causing data loss.
19271
19272 · Local
19273
19274 · Fix incomplete hashes - this was causing problems for B2.
19275
19276 · Amazon Drive
19277
19278 · Rename Amazon Cloud Drive to Amazon Drive - no changes to config
19279 file needed.
19280
19281 · Swift
19282
19283 · Add support for non-default project domain - thanks Antonio Messi‐
19284 na.
19285
19286 · S3
19287
19288 · Add instructions on how to use rclone with minio.
19289
19290 · Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
19291
19292 · Skip setting the modified time for objects > 5GB as it isn't possi‐
19293 ble.
19294
19295 · Backblaze B2
19296
19297 · Add –b2-versions flag so old versions can be listed and retreived.
19298
19299 · Treat 403 errors (eg cap exceeded) as fatal.
19300
19301 · Implement cleanup command for deleting old file versions.
19302
19303 · Make error handling compliant with B2 integrations notes.
19304
19305 · Fix handling of token expiry.
19306
19307 · Implement –b2-test-mode to set X-Bz-Test-Mode header.
19308
19309 · Set cutoff for chunked upload to 200MB as per B2 guidelines.
19310
19311 · Make upload multi-threaded.
19312
19313 · Dropbox
19314
19315 · Don't retry 461 errors.
19316
19317 v1.30 - 2016-06-18
19318 · New Features
19319
19320 · Directory listing code reworked for more features and better error
19321 reporting (thanks to Klaus Post for help). This enables
19322
19323 · Directory include filtering for efficiency
19324
19325 · –max-depth parameter
19326
19327 · Better error reporting
19328
19329 · More to come
19330
19331 · Retry more errors
19332
19333 · Add –ignore-size flag - for uploading images to onedrive
19334
19335 · Log -v output to stdout by default
19336
19337 · Display the transfer stats in more human readable form
19338
19339 · Make 0 size files specifiable with --max-size 0b
19340
19341 · Add b suffix so we can specify bytes in –bwlimit, –min-size etc
19342
19343 · Use “password:” instead of “password>” prompt - thanks Klaus Post
19344 and Leigh Klotz
19345
19346 · Bug Fixes
19347
19348 · Fix retry doing one too many retries
19349
19350 · Local
19351
19352 · Fix problems with OS X and UTF-8 characters
19353
19354 · Amazon Drive
19355
19356 · Check a file exists before uploading to help with 408 Conflict er‐
19357 rors
19358
19359 · Reauth on 401 errors - this has been causing a lot of problems
19360
19361 · Work around spurious 403 errors
19362
19363 · Restart directory listings on error
19364
19365 · Google Drive
19366
19367 · Check a file exists before uploading to help with duplicates
19368
19369 · Fix retry of multipart uploads
19370
19371 · Backblaze B2
19372
19373 · Implement large file uploading
19374
19375 · S3
19376
19377 · Add AES256 server-side encryption for - thanks Justin R. Wilson
19378
19379 · Google Cloud Storage
19380
19381 · Make sure we don't use conflicting content types on upload
19382
19383 · Add service account support - thanks Michal Witkowski
19384
19385 · Swift
19386
19387 · Add auth version parameter
19388
19389 · Add domain option for openstack (v3 auth) - thanks Fabian Ruff
19390
19391 v1.29 - 2016-04-18
19392 · New Features
19393
19394 · Implement -I, --ignore-times for unconditional upload
19395
19396 · Improve dedupecommand
19397
19398 · Now removes identical copies without asking
19399
19400 · Now obeys --dry-run
19401
19402 · Implement --dedupe-mode for non interactive running
19403
19404 · --dedupe-mode interactive - interactive the default.
19405
19406 · --dedupe-mode skip - removes identical files then skips any‐
19407 thing left.
19408
19409 · --dedupe-mode first - removes identical files then keeps the
19410 first one.
19411
19412 · --dedupe-mode newest - removes identical files then keeps the
19413 newest one.
19414
19415 · --dedupe-mode oldest - removes identical files then keeps the
19416 oldest one.
19417
19418 · --dedupe-mode rename - removes identical files then renames the
19419 rest to be different.
19420
19421 · Bug fixes
19422
19423 · Make rclone check obey the --size-only flag.
19424
19425 · Use “application/octet-stream” if discovered mime type is invalid.
19426
19427 · Fix missing “quit” option when there are no remotes.
19428
19429 · Google Drive
19430
19431 · Increase default chunk size to 8 MB - increases upload speed of big
19432 files
19433
19434 · Speed up directory listings and make more reliable
19435
19436 · Add missing retries for Move and DirMove - increases reliability
19437
19438 · Preserve mime type on file update
19439
19440 · Backblaze B2
19441
19442 · Enable mod time syncing
19443
19444 · This means that B2 will now check modification times
19445
19446 · It will upload new files to update the modification times
19447
19448 · (there isn't an API to just set the mod time.)
19449
19450 · If you want the old behaviour use --size-only.
19451
19452 · Update API to new version
19453
19454 · Fix parsing of mod time when not in metadata
19455
19456 · Swift/Hubic
19457
19458 · Don't return an MD5SUM for static large objects
19459
19460 · S3
19461
19462 · Fix uploading files bigger than 50GB
19463
19464 v1.28 - 2016-03-01
19465 · New Features
19466
19467 · Configuration file encryption - thanks Klaus Post
19468
19469 · Improve rclone config adding more help and making it easier to un‐
19470 derstand
19471
19472 · Implement -u/--update so creation times can be used on all remotes
19473
19474 · Implement --low-level-retries flag
19475
19476 · Optionally disable gzip compression on downloads with --no-gzip-en‐
19477 coding
19478
19479 · Bug fixes
19480
19481 · Don't make directories if --dry-run set
19482
19483 · Fix and document the move command
19484
19485 · Fix redirecting stderr on unix-like OSes when using --log-file
19486
19487 · Fix delete command to wait until all finished - fixes missing
19488 deletes.
19489
19490 · Backblaze B2
19491
19492 · Use one upload URL per go routine fixes more than one upload us‐
19493 ing auth token
19494
19495 · Add pacing, retries and reauthentication - fixes token expiry prob‐
19496 lems
19497
19498 · Upload without using a temporary file from local (and remotes which
19499 support SHA1)
19500
19501 · Fix reading metadata for all files when it shouldn't have been
19502
19503 · Drive
19504
19505 · Fix listing drive documents at root
19506
19507 · Disable copy and move for Google docs
19508
19509 · Swift
19510
19511 · Fix uploading of chunked files with non ASCII characters
19512
19513 · Allow setting of storage_url in the config - thanks Xavier Lucas
19514
19515 · S3
19516
19517 · Allow IAM role and credentials from environment variables - thanks
19518 Brian Stengaard
19519
19520 · Allow low privilege users to use S3 (check if directory exists dur‐
19521 ing Mkdir) - thanks Jakub Gedeon
19522
19523 · Amazon Drive
19524
19525 · Retry on more things to make directory listings more reliable
19526
19527 v1.27 - 2016-01-31
19528 · New Features
19529
19530 · Easier headless configuration with rclone authorize
19531
19532 · Add support for multiple hash types - we now check SHA1 as well as
19533 MD5 hashes.
19534
19535 · delete command which does obey the filters (unlike purge)
19536
19537 · dedupe command to deduplicate a remote. Useful with Google Drive.
19538
19539 · Add --ignore-existing flag to skip all files that exist on destina‐
19540 tion.
19541
19542 · Add --delete-before, --delete-during, --delete-after flags.
19543
19544 · Add --memprofile flag to debug memory use.
19545
19546 · Warn the user about files with same name but different case
19547
19548 · Make --include rules add their implict exclude * at the end of the
19549 filter list
19550
19551 · Deprecate compiling with go1.3
19552
19553 · Amazon Drive
19554
19555 · Fix download of files > 10 GB
19556
19557 · Fix directory traversal (“Next token is expired”) for large direc‐
19558 tory listings
19559
19560 · Remove 409 conflict from error codes we will retry - stops very
19561 long pauses
19562
19563 · Backblaze B2
19564
19565 · SHA1 hashes now checked by rclone core
19566
19567 · Drive
19568
19569 · Add --drive-auth-owner-only to only consider files owned by the us‐
19570 er - thanks Björn Harrtell
19571
19572 · Export Google documents
19573
19574 · Dropbox
19575
19576 · Make file exclusion error controllable with -q
19577
19578 · Swift
19579
19580 · Fix upload from unprivileged user.
19581
19582 · S3
19583
19584 · Fix updating of mod times of files with + in.
19585
19586 · Local
19587
19588 · Add local file system option to disable UNC on Windows.
19589
19590 v1.26 - 2016-01-02
19591 · New Features
19592
19593 · Yandex storage backend - thank you Dmitry Burdeev (“dibu”)
19594
19595 · Implement Backblaze B2 storage backend
19596
19597 · Add –min-age and –max-age flags - thank you Adriano Aurélio
19598 Meirelles
19599
19600 · Make ls/lsl/md5sum/size/check obey includes and excludes
19601
19602 · Fixes
19603
19604 · Fix crash in http logging
19605
19606 · Upload releases to github too
19607
19608 · Swift
19609
19610 · Fix sync for chunked files
19611
19612 · OneDrive
19613
19614 · Re-enable server side copy
19615
19616 · Don't mask HTTP error codes with JSON decode error
19617
19618 · S3
19619
19620 · Fix corrupting Content-Type on mod time update (thanks Joseph
19621 Spurrier)
19622
19623 v1.25 - 2015-11-14
19624 · New features
19625
19626 · Implement Hubic storage system
19627
19628 · Fixes
19629
19630 · Fix deletion of some excluded files without –delete-excluded
19631
19632 · This could have deleted files unexpectedly on sync
19633
19634 · Always check first with --dry-run!
19635
19636 · Swift
19637
19638 · Stop SetModTime losing metadata (eg X-Object-Manifest)
19639
19640 · This could have caused data loss for files > 5GB in size
19641
19642 · Use ContentType from Object to avoid lookups in listings
19643
19644 · OneDrive
19645
19646 · disable server side copy as it seems to be broken at Microsoft
19647
19648 v1.24 - 2015-11-07
19649 · New features
19650
19651 · Add support for Microsoft OneDrive
19652
19653 · Add --no-check-certificate option to disable server certificate
19654 verification
19655
19656 · Add async readahead buffer for faster transfer of big files
19657
19658 · Fixes
19659
19660 · Allow spaces in remotes and check remote names for validity at cre‐
19661 ation time
19662
19663 · Allow `&' and disallow `:' in Windows filenames.
19664
19665 · Swift
19666
19667 · Ignore directory marker objects where appropriate - allows working
19668 with Hubic
19669
19670 · Don't delete the container if fs wasn't at root
19671
19672 · S3
19673
19674 · Don't delete the bucket if fs wasn't at root
19675
19676 · Google Cloud Storage
19677
19678 · Don't delete the bucket if fs wasn't at root
19679
19680 v1.23 - 2015-10-03
19681 · New features
19682
19683 · Implement rclone size for measuring remotes
19684
19685 · Fixes
19686
19687 · Fix headless config for drive and gcs
19688
19689 · Tell the user they should try again if the webserver method failed
19690
19691 · Improve output of --dump-headers
19692
19693 · S3
19694
19695 · Allow anonymous access to public buckets
19696
19697 · Swift
19698
19699 · Stop chunked operations logging “Failed to read info: Object Not
19700 Found”
19701
19702 · Use Content-Length on uploads for extra reliability
19703
19704 v1.22 - 2015-09-28
19705 · Implement rsync like include and exclude flags
19706
19707 · swift
19708
19709 · Support files > 5GB - thanks Sergey Tolmachev
19710
19711 v1.21 - 2015-09-22
19712 · New features
19713
19714 · Display individual transfer progress
19715
19716 · Make lsl output times in localtime
19717
19718 · Fixes
19719
19720 · Fix allowing user to override credentials again in Drive, GCS and
19721 ACD
19722
19723 · Amazon Drive
19724
19725 · Implement compliant pacing scheme
19726
19727 · Google Drive
19728
19729 · Make directory reads concurrent for increased speed.
19730
19731 v1.20 - 2015-09-15
19732 · New features
19733
19734 · Amazon Drive support
19735
19736 · Oauth support redone - fix many bugs and improve usability
19737
19738 · Use “golang.org/x/oauth2” as oauth libary of choice
19739
19740 · Improve oauth usability for smoother initial signup
19741
19742 · drive, googlecloudstorage: optionally use auto config for the
19743 oauth token
19744
19745 · Implement –dump-headers and –dump-bodies debug flags
19746
19747 · Show multiple matched commands if abbreviation too short
19748
19749 · Implement server side move where possible
19750
19751 · local
19752
19753 · Always use UNC paths internally on Windows - fixes a lot of bugs
19754
19755 · dropbox
19756
19757 · force use of our custom transport which makes timeouts work
19758
19759 · Thanks to Klaus Post for lots of help with this release
19760
19761 v1.19 - 2015-08-28
19762 · New features
19763
19764 · Server side copies for s3/swift/drive/dropbox/gcs
19765
19766 · Move command - uses server side copies if it can
19767
19768 · Implement –retries flag - tries 3 times by default
19769
19770 · Build for plan9/amd64 and solaris/amd64 too
19771
19772 · Fixes
19773
19774 · Make a current version download with a fixed URL for scripting
19775
19776 · Ignore rmdir in limited fs rather than throwing error
19777
19778 · dropbox
19779
19780 · Increase chunk size to improve upload speeds massively
19781
19782 · Issue an error message when trying to upload bad file name
19783
19784 v1.18 - 2015-08-17
19785 · drive
19786
19787 · Add --drive-use-trash flag so rclone trashes instead of deletes
19788
19789 · Add “Forbidden to download” message for files with no downloadURL
19790
19791 · dropbox
19792
19793 · Remove datastore
19794
19795 · This was deprecated and it caused a lot of problems
19796
19797 · Modification times and MD5SUMs no longer stored
19798
19799 · Fix uploading files > 2GB
19800
19801 · s3
19802
19803 · use official AWS SDK from github.com/aws/aws-sdk-go
19804
19805 · NB will most likely require you to delete and recreate remote
19806
19807 · enable multipart upload which enables files > 5GB
19808
19809 · tested with Ceph / RadosGW / S3 emulation
19810
19811 · many thanks to Sam Liston and Brian Haymore at the Utah Center for
19812 High Performance Computing (https://www.chpc.utah.edu/) for a Ceph
19813 test account
19814
19815 · misc
19816
19817 · Show errors when reading the config file
19818
19819 · Do not print stats in quiet mode - thanks Leonid Shalupov
19820
19821 · Add FAQ
19822
19823 · Fix created directories not obeying umask
19824
19825 · Linux installation instructions - thanks Shimon Doodkin
19826
19827 v1.17 - 2015-06-14
19828 · dropbox: fix case insensitivity issues - thanks Leonid Shalupov
19829
19830 v1.16 - 2015-06-09
19831 · Fix uploading big files which was causing timeouts or panics
19832
19833 · Don't check md5sum after download with –size-only
19834
19835 v1.15 - 2015-06-06
19836 · Add –checksum flag to only discard transfers by MD5SUM - thanks Alex
19837 Couper
19838
19839 · Implement –size-only flag to sync on size not checksum & modtime
19840
19841 · Expand docs and remove duplicated information
19842
19843 · Document rclone's limitations with directories
19844
19845 · dropbox: update docs about case insensitivity
19846
19847 v1.14 - 2015-05-21
19848 · local: fix encoding of non utf-8 file names - fixes a duplicate file
19849 problem
19850
19851 · drive: docs about rate limiting
19852
19853 · google cloud storage: Fix compile after API change in
19854 “google.golang.org/api/storage/v1”
19855
19856 v1.13 - 2015-05-10
19857 · Revise documentation (especially sync)
19858
19859 · Implement –timeout and –conntimeout
19860
19861 · s3: ignore etags from multipart uploads which aren't md5sums
19862
19863 v1.12 - 2015-03-15
19864 · drive: Use chunked upload for files above a certain size
19865
19866 · drive: add –drive-chunk-size and –drive-upload-cutoff parameters
19867
19868 · drive: switch to insert from update when a failed copy deletes the
19869 upload
19870
19871 · core: Log duplicate files if they are detected
19872
19873 v1.11 - 2015-03-04
19874 · swift: add region parameter
19875
19876 · drive: fix crash on failed to update remote mtime
19877
19878 · In remote paths, change native directory separators to /
19879
19880 · Add synchronization to ls/lsl/lsd output to stop corruptions
19881
19882 · Ensure all stats/log messages to go stderr
19883
19884 · Add –log-file flag to log everything (including panics) to file
19885
19886 · Make it possible to disable stats printing with –stats=0
19887
19888 · Implement –bwlimit to limit data transfer bandwidth
19889
19890 v1.10 - 2015-02-12
19891 · s3: list an unlimited number of items
19892
19893 · Fix getting stuck in the configurator
19894
19895 v1.09 - 2015-02-07
19896 · windows: Stop drive letters (eg C:) getting mixed up with remotes (eg
19897 drive:)
19898
19899 · local: Fix directory separators on Windows
19900
19901 · drive: fix rate limit exceeded errors
19902
19903 v1.08 - 2015-02-04
19904 · drive: fix subdirectory listing to not list entire drive
19905
19906 · drive: Fix SetModTime
19907
19908 · dropbox: adapt code to recent library changes
19909
19910 v1.07 - 2014-12-23
19911 · google cloud storage: fix memory leak
19912
19913 v1.06 - 2014-12-12
19914 · Fix “Couldn't find home directory” on OSX
19915
19916 · swift: Add tenant parameter
19917
19918 · Use new location of Google API packages
19919
19920 v1.05 - 2014-08-09
19921 · Improved tests and consequently lots of minor fixes
19922
19923 · core: Fix race detected by go race detector
19924
19925 · core: Fixes after running errcheck
19926
19927 · drive: reset root directory on Rmdir and Purge
19928
19929 · fs: Document that Purger returns error on empty directory, test and
19930 fix
19931
19932 · google cloud storage: fix ListDir on subdirectory
19933
19934 · google cloud storage: re-read metadata in SetModTime
19935
19936 · s3: make reading metadata more reliable to work around eventual con‐
19937 sistency problems
19938
19939 · s3: strip trailing / from ListDir()
19940
19941 · swift: return directories without / in ListDir
19942
19943 v1.04 - 2014-07-21
19944 · google cloud storage: Fix crash on Update
19945
19946 v1.03 - 2014-07-20
19947 · swift, s3, dropbox: fix updated files being marked as corrupted
19948
19949 · Make compile with go 1.1 again
19950
19951 v1.02 - 2014-07-19
19952 · Implement Dropbox remote
19953
19954 · Implement Google Cloud Storage remote
19955
19956 · Verify Md5sums and Sizes after copies
19957
19958 · Remove times from “ls” command - lists sizes only
19959
19960 · Add add “lsl” - lists times and sizes
19961
19962 · Add “md5sum” command
19963
19964 v1.01 - 2014-07-04
19965 · drive: fix transfer of big files using up lots of memory
19966
19967 v1.00 - 2014-07-03
19968 · drive: fix whole second dates
19969
19970 v0.99 - 2014-06-26
19971 · Fix –dry-run not working
19972
19973 · Make compatible with go 1.1
19974
19975 v0.98 - 2014-05-30
19976 · s3: Treat missing Content-Length as 0 for some ceph installations
19977
19978 · rclonetest: add file with a space in
19979
19980 v0.97 - 2014-05-05
19981 · Implement copying of single files
19982
19983 · s3 & swift: support paths inside containers/buckets
19984
19985 v0.96 - 2014-04-24
19986 · drive: Fix multiple files of same name being created
19987
19988 · drive: Use o.Update and fs.Put to optimise transfers
19989
19990 · Add version number, -V and –version
19991
19992 v0.95 - 2014-03-28
19993 · rclone.org: website, docs and graphics
19994
19995 · drive: fix path parsing
19996
19997 v0.94 - 2014-03-27
19998 · Change remote format one last time
19999
20000 · GNU style flags
20001
20002 v0.93 - 2014-03-16
20003 · drive: store token in config file
20004
20005 · cross compile other versions
20006
20007 · set strict permissions on config file
20008
20009 v0.92 - 2014-03-15
20010 · Config fixes and –config option
20011
20012 v0.91 - 2014-03-15
20013 · Make config file
20014
20015 v0.90 - 2013-06-27
20016 · Project named rclone
20017
20018 v0.00 - 2012-11-18
20019 · Project started
20020
20021 Bugs and Limitations
20022 Empty directories are left behind / not created
20023 With remotes that have a concept of directory, eg Local and Drive, emp‐
20024 ty directories may be left behind, or not created when one was expect‐
20025 ed.
20026
20027 This is because rclone doesn't have a concept of a directory - it only
20028 works on objects. Most of the object storage systems can't actually
20029 store a directory so there is nowhere for rclone to store anything
20030 about directories.
20031
20032 You can work round this to some extent with thepurge command which will
20033 delete everything under the path, inluding empty directories.
20034
20035 This may be fixed at some point in Issue #100
20036 (https://github.com/ncw/rclone/issues/100)
20037
20038 Directory timestamps aren't preserved
20039 For the same reason as the above, rclone doesn't have a concept of a
20040 directory - it only works on objects, therefore it can't preserve the
20041 timestamps of directories.
20042
20043 Frequently Asked Questions
20044 Do all cloud storage systems support all rclone commands
20045 Yes they do. All the rclone commands (eg sync, copy etc) will work on
20046 all the remote storage systems.
20047
20048 Can I copy the config from one machine to another
20049 Sure! Rclone stores all of its config in a single file. If you want to
20050 find this file, run rclone config file which will tell you where it is.
20051
20052 See the remote setup docs (https://rclone.org/remote_setup/) for more
20053 info.
20054
20055 How do I configure rclone on a remote / headless box with no
20056 browser?
20057
20058 This has now been documented in its own remote setup page
20059 (https://rclone.org/remote_setup/).
20060
20061 Can rclone sync directly from drive to s3
20062 Rclone can sync between two remote cloud storage systems just fine.
20063
20064 Note that it effectively downloads the file and uploads it again, so
20065 the node running rclone would need to have lots of bandwidth.
20066
20067 The syncs would be incremental (on a file by file basis).
20068
20069 Eg
20070
20071 rclone sync drive:Folder s3:bucket
20072
20073 Using rclone from multiple locations at the same time
20074 You can use rclone from multiple places at the same time if you choose
20075 different subdirectory for the output, eg
20076
20077 Server A> rclone sync /tmp/whatever remote:ServerA
20078 Server B> rclone sync /tmp/whatever remote:ServerB
20079
20080 If you sync to the same directory then you should use rclone copy oth‐
20081 erwise the two rclones may delete each others files, eg
20082
20083 Server A> rclone copy /tmp/whatever remote:Backup
20084 Server B> rclone copy /tmp/whatever remote:Backup
20085
20086 The file names you upload from Server A and Server B should be differ‐
20087 ent in this case, otherwise some file systems (eg Drive) may make du‐
20088 plicates.
20089
20090 Why doesn't rclone support partial transfers / binary diffs like
20091 rsync?
20092
20093 Rclone stores each file you transfer as a native object on the remote
20094 cloud storage system. This means that you can see the files you upload
20095 as expected using alternative access methods (eg using the Google Drive
20096 web interface). There is a 1:1 mapping between files on your hard disk
20097 and objects created in the cloud storage system.
20098
20099 Cloud storage systems (at least none I've come across yet) don't sup‐
20100 port partially uploading an object. You can't take an existing object,
20101 and change some bytes in the middle of it.
20102
20103 It would be possible to make a sync system which stored binary diffs
20104 instead of whole objects like rclone does, but that would break the 1:1
20105 mapping of files on your hard disk to objects in the remote cloud stor‐
20106 age system.
20107
20108 All the cloud storage systems support partial downloads of content, so
20109 it would be possible to make partial downloads work. However to make
20110 this work efficiently this would require storing a significant amount
20111 of metadata, which breaks the desired 1:1 mapping of files to objects.
20112
20113 Can rclone do bi-directional sync?
20114 No, not at present. rclone only does uni-directional sync from A -> B.
20115 It may do in the future though since it has all the primitives - it
20116 just requires writing the algorithm to do it.
20117
20118 Can I use rclone with an HTTP proxy?
20119 Yes. rclone will follow the standard environment variables for prox‐
20120 ies, similar to cURL and other programs.
20121
20122 In general the variables are called http_proxy (for services reached
20123 over http) and https_proxy (for services reached over https). Most
20124 public services will be using https, but you may wish to set both.
20125
20126 The content of the variable is protocol://server:port. The protocol
20127 value is the one used to talk to the proxy server, itself, and is com‐
20128 monly either http or socks5.
20129
20130 Slightly annoyingly, there is no standard for the name; some applica‐
20131 tions may use http_proxy but another one HTTP_PROXY. The Go libraries
20132 used by rclone will try both variations, but you may wish to set all
20133 possibilities. So, on Linux, you may end up with code similar to
20134
20135 export http_proxy=http://proxyserver:12345
20136 export https_proxy=$http_proxy
20137 export HTTP_PROXY=$http_proxy
20138 export HTTPS_PROXY=$http_proxy
20139
20140 The NO_PROXY allows you to disable the proxy for specific hosts. Hosts
20141 must be comma separated, and can contain domains or parts. For in‐
20142 stance “foo.com” also matches “bar.foo.com”.
20143
20144 e.g.
20145
20146 export no_proxy=localhost,127.0.0.0/8,my.host.name
20147 export NO_PROXY=$no_proxy
20148
20149 Note that the ftp backend does not support ftp_proxy yet.
20150
20151 Rclone gives x509: failed to load system roots and no roots provided
20152 error
20153
20154 This means that rclone can't file the SSL root certificates. Likely
20155 you are running rclone on a NAS with a cut-down Linux OS, or possibly
20156 on Solaris.
20157
20158 Rclone (via the Go runtime) tries to load the root certificates from
20159 these places on Linux.
20160
20161 "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
20162 "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
20163 "/etc/ssl/ca-bundle.pem", // OpenSUSE
20164 "/etc/pki/tls/cacert.pem", // OpenELEC
20165
20166 So doing something like this should fix the problem. It also sets the
20167 time which is important for SSL to work properly.
20168
20169 mkdir -p /etc/ssl/certs/
20170 curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
20171 ntpclient -s -h pool.ntp.org
20172
20173 The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned
20174 in the x509 pacakge (https://godoc.org/crypto/x509), provide an addi‐
20175 tional way to provide the SSL root certificates.
20176
20177 Note that you may need to add the --insecure option to the curl command
20178 line if it doesn't work without.
20179
20180 curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
20181
20182 Rclone gives Failed to load config file: function not implemented
20183 error
20184
20185 Likely this means that you are running rclone on Linux version not sup‐
20186 ported by the go runtime, ie earlier than version 2.6.23.
20187
20188 See the system requirements section in the go install docs
20189 (https://golang.org/doc/install) for full details.
20190
20191 All my uploaded docx/xlsx/pptx files appear as archive/zip
20192 This is caused by uploading these files from a Windows computer which
20193 hasn't got the Microsoft Office suite installed. The easiest way to
20194 fix is to install the Word viewer and the Microsoft Office Compatibili‐
20195 ty Pack for Word, Excel, and PowerPoint 2007 and later versions' file
20196 formats
20197
20198 tcp lookup some.domain.com no such host
20199 This happens when rclone cannot resolve a domain. Please check that
20200 your DNS setup is generally working, e.g.
20201
20202 # both should print a long list of possible IP addresses
20203 dig www.googleapis.com # resolve using your default DNS
20204 dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
20205
20206 If you are using systemd-resolved (default on Arch Linux), ensure it is
20207 at version 233 or higher. Previous releases contain a bug which causes
20208 not all domains to be resolved properly.
20209
20210 Additionally with the GODEBUG=netdns= environment variable the Go re‐
20211 solver decision can be influenced. This also allows to resolve certain
20212 issues with DNS resolution. See the name resolution section in the go
20213 docs (https://golang.org/pkg/net/#hdr-Name_Resolution).
20214
20215 The total size reported in the stats for a sync is wrong and keeps
20216 changing
20217
20218 It is likely you have more than 10,000 files that need to be synced.
20219 By default rclone only gets 10,000 files ahead in a sync so as not to
20220 use up too much memory. You can change this default with the
20221 –max-backlog (/docs/#max-backlog-n) flag.
20222
20223 License
20224 This is free software under the terms of MIT the license (check the
20225 COPYING file included with the source code).
20226
20227 Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/
20228
20229 Permission is hereby granted, free of charge, to any person obtaining a copy
20230 of this software and associated documentation files (the "Software"), to deal
20231 in the Software without restriction, including without limitation the rights
20232 to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
20233 copies of the Software, and to permit persons to whom the Software is
20234 furnished to do so, subject to the following conditions:
20235
20236 The above copyright notice and this permission notice shall be included in
20237 all copies or substantial portions of the Software.
20238
20239 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
20240 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
20241 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
20242 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20243 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20244 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
20245 THE SOFTWARE.
20246
20247 Authors
20248 · Nick Craig-Wood <nick@craig-wood.com>
20249
20250 Contributors
20251 · Alex Couper <amcouper@gmail.com>
20252
20253 · Leonid Shalupov <leonid@shalupov.com> <shalupov@diverse.org.ru>
20254
20255 · Shimon Doodkin <helpmepro1@gmail.com>
20256
20257 · Colin Nicholson <colin@colinn.com>
20258
20259 · Klaus Post <klauspost@gmail.com>
20260
20261 · Sergey Tolmachev <tolsi.ru@gmail.com>
20262
20263 · Adriano Aurélio Meirelles <adriano@atinge.com>
20264
20265 · C. Bess <cbess@users.noreply.github.com>
20266
20267 · Dmitry Burdeev <dibu28@gmail.com>
20268
20269 · Joseph Spurrier <github@josephspurrier.com>
20270
20271 · Björn Harrtell <bjorn@wololo.org>
20272
20273 · Xavier Lucas <xavier.lucas@corp.ovh.com>
20274
20275 · Werner Beroux <werner@beroux.com>
20276
20277 · Brian Stengaard <brian@stengaard.eu>
20278
20279 · Jakub Gedeon <jgedeon@sofi.com>
20280
20281 · Jim Tittsler <jwt@onjapan.net>
20282
20283 · Michal Witkowski <michal@improbable.io>
20284
20285 · Fabian Ruff <fabian.ruff@sap.com>
20286
20287 · Leigh Klotz <klotz@quixey.com>
20288
20289 · Romain Lapray <lapray.romain@gmail.com>
20290
20291 · Justin R. Wilson <jrw972@gmail.com>
20292
20293 · Antonio Messina <antonio.s.messina@gmail.com>
20294
20295 · Stefan G. Weichinger <office@oops.co.at>
20296
20297 · Per Cederberg <cederberg@gmail.com>
20298
20299 · Radek Šenfeld <rush@logic.cz>
20300
20301 · Fredrik Fornwall <fredrik@fornwall.net>
20302
20303 · Asko Tamm <asko@deekit.net>
20304
20305 · xor-zz <xor@gstocco.com>
20306
20307 · Tomasz Mazur <tmazur90@gmail.com>
20308
20309 · Marco Paganini <paganini@paganini.net>
20310
20311 · Felix Bünemann <buenemann@louis.info>
20312
20313 · Durval Menezes <jmrclone@durval.com>
20314
20315 · Luiz Carlos Rumbelsperger Viana <maxd13_luiz_carlos@hotmail.com>
20316
20317 · Stefan Breunig <stefan-github@yrden.de>
20318
20319 · Alishan Ladhani <ali-l@users.noreply.github.com>
20320
20321 · 0xJAKE <0xJAKE@users.noreply.github.com>
20322
20323 · Thibault Molleman <thibaultmol@users.noreply.github.com>
20324
20325 · Scott McGillivray <scott.mcgillivray@gmail.com>
20326
20327 · Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com>
20328
20329 · Lukas Loesche <lukas@mesosphere.io>
20330
20331 · emyarod <allllaboutyou@gmail.com>
20332
20333 · T.C. Ferguson <tcf909@gmail.com>
20334
20335 · Brandur <brandur@mutelight.org>
20336
20337 · Dario Giovannetti <dev@dariogiovannetti.net>
20338
20339 · Károly Oláh <okaresz@aol.com>
20340
20341 · Jon Yergatian <jon@macfanatic.ca>
20342
20343 · Jack Schmidt <github@mowsey.org>
20344
20345 · Dedsec1 <Dedsec1@users.noreply.github.com>
20346
20347 · Hisham Zarka <hzarka@gmail.com>
20348
20349 · Jérôme Vizcaino <jerome.vizcaino@gmail.com>
20350
20351 · Mike Tesch <mjt6129@rit.edu>
20352
20353 · Marvin Watson <marvwatson@users.noreply.github.com>
20354
20355 · Danny Tsai <danny8376@gmail.com>
20356
20357 · Yoni Jah <yonjah+git@gmail.com> <yonjah+github@gmail.com>
20358
20359 · Stephen Harris <github@spuddy.org> <sweharris@users.nore‐
20360 ply.github.com>
20361
20362 · Ihor Dvoretskyi <ihor.dvoretskyi@gmail.com>
20363
20364 · Jon Craton <jncraton@gmail.com>
20365
20366 · Hraban Luyat <hraban@0brg.net>
20367
20368 · Michael Ledin <mledin89@gmail.com>
20369
20370 · Martin Kristensen <me@azgul.com>
20371
20372 · Too Much IO <toomuchio@users.noreply.github.com>
20373
20374 · Anisse Astier <anisse@astier.eu>
20375
20376 · Zahiar Ahmed <zahiar@live.com>
20377
20378 · Igor Kharin <igorkharin@gmail.com>
20379
20380 · Bill Zissimopoulos <billziss@navimatics.com>
20381
20382 · Bob Potter <bobby.potter@gmail.com>
20383
20384 · Steven Lu <tacticalazn@gmail.com>
20385
20386 · Sjur Fredriksen <sjurtf@ifi.uio.no>
20387
20388 · Ruwbin <hubus12345@gmail.com>
20389
20390 · Fabian Möller <fabianm88@gmail.com> <f.moeller@nynex.de>
20391
20392 · Edward Q. Bridges <github@eqbridges.com>
20393
20394 · Vasiliy Tolstov <v.tolstov@selfip.ru>
20395
20396 · Harshavardhana <harsha@minio.io>
20397
20398 · sainaen <sainaen@gmail.com>
20399
20400 · gdm85 <gdm85@users.noreply.github.com>
20401
20402 · Yaroslav Halchenko <debian@onerussian.com>
20403
20404 · John Papandriopoulos <jpap@users.noreply.github.com>
20405
20406 · Zhiming Wang <zmwangx@gmail.com>
20407
20408 · Andy Pilate <cubox@cubox.me>
20409
20410 · Oliver Heyme <olihey@googlemail.com> <olihey@users.nore‐
20411 ply.github.com> <de8olihe@lego.com>
20412
20413 · wuyu <wuyu@yunify.com>
20414
20415 · Andrei Dragomir <adragomi@adobe.com>
20416
20417 · Christian Brüggemann <mail@cbruegg.com>
20418
20419 · Alex McGrath Kraak <amkdude@gmail.com>
20420
20421 · bpicode <bjoern.pirnay@googlemail.com>
20422
20423 · Daniel Jagszent <daniel@jagszent.de>
20424
20425 · Josiah White <thegenius2009@gmail.com>
20426
20427 · Ishuah Kariuki <kariuki@ishuah.com> <ishuah91@gmail.com>
20428
20429 · Jan Varho <jan@varho.org>
20430
20431 · Girish Ramakrishnan <girish@cloudron.io>
20432
20433 · LingMan <LingMan@users.noreply.github.com>
20434
20435 · Jacob McNamee <jacobmcnamee@gmail.com>
20436
20437 · jersou <jertux@gmail.com>
20438
20439 · thierry <thierry@substantiel.fr>
20440
20441 · Simon Leinen <simon.leinen@gmail.com> <ubuntu@s3-test.novalocal>
20442
20443 · Dan Dascalescu <ddascalescu+github@gmail.com>
20444
20445 · Jason Rose <jason@jro.io>
20446
20447 · Andrew Starr-Bochicchio <a.starr.b@gmail.com>
20448
20449 · John Leach <john@johnleach.co.uk>
20450
20451 · Corban Raun <craun@instructure.com>
20452
20453 · Pierre Carlson <mpcarl@us.ibm.com>
20454
20455 · Ernest Borowski <er.borowski@gmail.com>
20456
20457 · Remus Bunduc <remus.bunduc@gmail.com>
20458
20459 · Iakov Davydov <iakov.davydov@unil.ch> <dav05.gith@myths.ru>
20460
20461 · Jakub Tasiemski <tasiemski@gmail.com>
20462
20463 · David Minor <dminor@saymedia.com>
20464
20465 · Tim Cooijmans <cooijmans.tim@gmail.com>
20466
20467 · Laurence <liuxy6@gmail.com>
20468
20469 · Giovanni Pizzi <gio.piz@gmail.com>
20470
20471 · Filip Bartodziej <filipbartodziej@gmail.com>
20472
20473 · Jon Fautley <jon@dead.li>
20474
20475 · lewapm <32110057+lewapm@users.noreply.github.com>
20476
20477 · Yassine Imounachen <yassine256@gmail.com>
20478
20479 · Chris Redekop <chris-redekop@users.noreply.github.com> <chris.re‐
20480 dekop@gmail.com>
20481
20482 · Jon Fautley <jon@adenoid.appstal.co.uk>
20483
20484 · Will Gunn <WillGunn@users.noreply.github.com>
20485
20486 · Lucas Bremgartner <lucas@bremis.ch>
20487
20488 · Jody Frankowski <jody.frankowski@gmail.com>
20489
20490 · Andreas Roussos <arouss1980@gmail.com>
20491
20492 · nbuchanan <nbuchanan@utah.gov>
20493
20494 · Durval Menezes <rclone@durval.com>
20495
20496 · Victor <vb-github@viblo.se>
20497
20498 · Mateusz <pabian.mateusz@gmail.com>
20499
20500 · Daniel Loader <spicypixel@gmail.com>
20501
20502 · David0rk <davidork@gmail.com>
20503
20504 · Alexander Neumann <alexander@bumpern.de>
20505
20506 · Giri Badanahatti <gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local>
20507
20508 · Leo R. Lundgren <leo@finalresort.org>
20509
20510 · wolfv <wolfv6@users.noreply.github.com>
20511
20512 · Dave Pedu <dave@davepedu.com>
20513
20514 · Stefan Lindblom <lindblom@spotify.com>
20515
20516 · seuffert <oliver@seuffert.biz>
20517
20518 · gbadanahatti <37121690+gbadanahatti@users.noreply.github.com>
20519
20520 · Keith Goldfarb <barkofdelight@gmail.com>
20521
20522 · Steve Kriss <steve@heptio.com>
20523
20524 · Chih-Hsuan Yen <yan12125@gmail.com>
20525
20526 · Alexander Neumann <fd0@users.noreply.github.com>
20527
20528 · Matt Holt <mholt@users.noreply.github.com>
20529
20530 · Eri Bastos <bastos.eri@gmail.com>
20531
20532 · Michael P. Dubner <pywebmail@list.ru>
20533
20534 · Antoine GIRARD <sapk@users.noreply.github.com>
20535
20536 · Mateusz Piotrowski <mpp302@gmail.com>
20537
20538 · Animosity022 <animosity22@users.noreply.github.com> <earl.tex‐
20539 ter@gmail.com>
20540
20541 · Peter Baumgartner <pete@lincolnloop.com>
20542
20543 · Craig Rachel <craig@craigrachel.com>
20544
20545 · Michael G. Noll <miguno@users.noreply.github.com>
20546
20547 · hensur <me@hensur.de>
20548
20549 · Oliver Heyme <de8olihe@lego.com>
20550
20551 · Richard Yang <richard@yenforyang.com>
20552
20553 · Piotr Oleszczyk <piotr.oleszczyk@gmail.com>
20554
20555 · Rodrigo <rodarima@gmail.com>
20556
20557 · NoLooseEnds <NoLooseEnds@users.noreply.github.com>
20558
20559 · Jakub Karlicek <jakub@karlicek.me>
20560
20561 · John Clayton <john@codemonkeylabs.com>
20562
20563 · Kasper Byrdal Nielsen <byrdal76@gmail.com>
20564
20565 · Benjamin Joseph Dag <bjdag1234@users.noreply.github.com>
20566
20567 · themylogin <themylogin@gmail.com>
20568
20569 · Onno Zweers <onno.zweers@surfsara.nl>
20570
20571 · Jasper Lievisse Adriaanse <jasper@humppa.nl>
20572
20573 · sandeepkru <sandeep.ummadi@gmail.com> <sandeepkru@users.nore‐
20574 ply.github.com>
20575
20576 · HerrH <atomtigerzoo@users.noreply.github.com>
20577
20578 · Andrew <4030760+sparkyman215@users.noreply.github.com>
20579
20580 · dan smith <XX1011@gmail.com>
20581
20582 · Oleg Kovalov <iamolegkovalov@gmail.com>
20583
20584 · Ruben Vandamme <github-com-00ff86@vandamme.email>
20585
20586 · Cnly <minecnly@gmail.com>
20587
20588 · Andres Alvarez <1671935+kir4h@users.noreply.github.com>
20589
20590 · reddi1 <xreddi@gmail.com>
20591
20592 · Matt Tucker <matthewtckr@gmail.com>
20593
20594 · Sebastian Bünger <buengese@gmail.com>
20595
20596 · Martin Polden <mpolden@mpolden.no>
20597
20598 · Alex Chen <Cnly@users.noreply.github.com>
20599
20600 · Denis <deniskovpen@gmail.com>
20601
20602 · bsteiss <35940619+bsteiss@users.noreply.github.com>
20603
20604 · Cédric Connes <cedric.connes@gmail.com>
20605
20606 · Dr. Tobias Quathamer <toddy15@users.noreply.github.com>
20607
20608 · dcpu <42736967+dcpu@users.noreply.github.com>
20609
20610 · Sheldon Rupp <me@shel.io>
20611
20612 · albertony <12441419+albertony@users.noreply.github.com>
20613
20614 · cron410 <cron410@gmail.com>
20615
20616 · Anagh Kumar Baranwal <anaghk.dos@gmail.com>
20617
20618 · Felix Brucker <felix@felixbrucker.com>
20619
20620 · Santiago Rodríguez <scollazo@users.noreply.github.com>
20621
20622 · Craig Miskell <craig.miskell@fluxfederation.com>
20623
20624 · Antoine GIRARD <sapk@sapk.fr>
20625
20626 · Joanna Marek <joanna.marek@u2i.com>
20627
20628 · frenos <frenos@users.noreply.github.com>
20629
20630 · ssaqua <ssaqua@users.noreply.github.com>
20631
20632 · xnaas <me@xnaas.info>
20633
20634 · Frantisek Fuka <fuka@fuxoft.cz>
20635
20636 · Paul Kohout <pauljkohout@yahoo.com>
20637
20638 · dcpu <43330287+dcpu@users.noreply.github.com>
20639
20640 · jackyzy823 <jackyzy823@gmail.com>
20641
20642 · David Haguenauer <ml@kurokatta.org>
20643
20644 · teresy <hi.teresy@gmail.com>
20645
20646 · buergi <patbuergi@gmx.de>
20647
20648 · Florian Gamboeck <mail@floga.de>
20649
20650 · Ralf Hemberger <10364191+rhemberger@users.noreply.github.com>
20651
20652 · Scott Edlund <sedlund@users.noreply.github.com>
20653
20654 · Erik Swanson <erik@retailnext.net>
20655
20656 · Jake Coggiano <jake@stripe.com>
20657
20658 · brused27 <brused27@noemailaddress>
20659
20660 · Peter Kaminski <kaminski@istori.com>
20661
20662 · Henry Ptasinski <henry@logout.com>
20663
20664 · Alexander <kharkovalexander@gmail.com>
20665
20666 · Garry McNulty <garrmcnu@gmail.com>
20667
20668 · Mathieu Carbou <mathieu.carbou@gmail.com>
20669
20670 · Mark Otway <mark@otway.com>
20671
20672 · William Cocker <37018962+WilliamCocker@users.noreply.github.com>
20673
20674 · François Leurent <131.js@cloudyks.org>
20675
20676 · Arkadius Stefanski <arkste@gmail.com>
20677
20678 · Jay <dev@jaygoel.com>
20679
20680 · andrea rota <a@xelera.eu>
20681
20682 · nicolov <nicolov@users.noreply.github.com>
20683
20684 · Dario Guzik <dario@guzik.com.ar>
20685
20686 · qip <qip@users.noreply.github.com>
20687
20688 · yair@unicorn <yair@unicorn>
20689
20690 · Matt Robinson <brimstone@the.narro.ws>
20691
20692 · kayrus <kay.diam@gmail.com>
20693
20694 · Rémy Léone <remy.leone@gmail.com>
20695
20696 · Wojciech Smigielski <wojciech.hieronim.smigielski@gmail.com>
20697
20698 · weetmuts <oehrstroem@gmail.com>
20699
20700 · Jonathan <vanillajonathan@users.noreply.github.com>
20701
20702 · James Carpenter <orbsmiv@users.noreply.github.com>
20703
20704 · Vince <vince0villamora@gmail.com>
20705
20706 · Nestar47 <47841759+Nestar47@users.noreply.github.com>
20707
20708 · Six <brbsix@gmail.com>
20709
20710 · Alexandru Bumbacea <alexandru.bumbacea@booking.com>
20711
20712 · calisro <robert.calistri@gmail.com>
20713
20714 · Dr.Rx <david.rey@nventive.com>
20715
20716 · marcintustin <marcintustin@users.noreply.github.com>
20717
20718 · jaKa Močnik <jaka@koofr.net>
20719
20720 · Fionera <fionera@fionera.de>
20721
20722 · Dan Walters <dan@walters.io>
20723
20724 · Danil Semelenov <sgtpep@users.noreply.github.com>
20725
20726 · xopez <28950736+xopez@users.noreply.github.com>
20727
20728 · Ben Boeckel <mathstuf@gmail.com>
20729
20730 · Manu <manu@snapdragon.cc>
20731
20733 Forum
20734 Forum for questions and general discussion:
20735
20736 · https://forum.rclone.org
20737
20738 Gitub project
20739 The project website is at:
20740
20741 · https://github.com/ncw/rclone
20742
20743 There you can file bug reports or contribute pull requests.
20744
20745 Twitter
20746 You can also follow me on twitter for rclone announcements:
20747
20748 · [@njcw](https://twitter.com/njcw)
20749
20750 Email
20751 Or if all else fails or you want to ask something private or confiden‐
20752 tial email Nick Craig-Wood (mailto:nick@craig-wood.com)
20753
20755 Nick Craig-Wood.
20756
20757
20758
20759User Manual Apr 13, 2019 rclone(1)