S3FS(1)                          User Commands                         S3FS(1)

NAME
       S3FS - FUSE-based file system backed by Amazon S3

SYNOPSIS
   mounting
       s3fs bucket[:/path] mountpoint [options]

       s3fs mountpoint [options (must specify bucket= option)]

   unmounting
       umount mountpoint
              For root.

       fusermount -u mountpoint
              For unprivileged users.

   utility mode (remove interrupted multipart uploading objects)
       s3fs --incomplete-mpu-list (-u) bucket

       s3fs --incomplete-mpu-abort[=all | =<expire date format>] bucket

DESCRIPTION
       s3fs is a FUSE filesystem that allows you to mount an Amazon S3
       bucket as a local filesystem. It stores files natively and
       transparently in S3 (i.e., you can use other programs to access
       the same files).

AUTHENTICATION
       s3fs supports the standard AWS credentials file
       (https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html)
       stored in `${HOME}/.aws/credentials`. Alternatively, s3fs supports
       a custom passwd file. Only the AWS credentials file format can be
       used when an AWS session token is required. The s3fs password file
       has this format (use this format if you have only one set of
       credentials):
              accessKeyId:secretAccessKey

       If you have more than one set of credentials, this syntax is also
       recognized:
              bucketName:accessKeyId:secretAccessKey

       Password files can be stored in two locations:
              /etc/passwd-s3fs     [0640]
              $HOME/.passwd-s3fs   [0600]

       s3fs also recognizes the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
       environment variables.
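
       As a quick sketch (the key values below are the standard AWS
       documentation placeholders, not real credentials), a
       single-credential password file can be created and secured like
       this:

```shell
# Write one accessKeyId:secretAccessKey pair to the s3fs password file.
# The values here are placeholders -- substitute your own credentials.
PASSWD_FILE="${HOME}/.passwd-s3fs"
echo "AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > "$PASSWD_FILE"
# s3fs refuses password files readable by other users, so tighten permissions.
chmod 600 "$PASSWD_FILE"
```

       A system-wide file would instead go in /etc/passwd-s3fs with mode
       0640.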

OPTIONS
   general options
       -h   --help
              print help

       --version
              print version

       -f     FUSE foreground option - do not run as daemon.

       -s     FUSE single-threaded option (disables multi-threaded
              operation)
   mount options
       All s3fs options must be given in the form "-o opt", where "opt"
       is:
              <option_name>=<option_value>

       -o bucket
              if the bucket name (and path) is not specified on the
              command line, it must be supplied with this option
              following -o.

       -o default_acl (default="private")
              the default canned ACL to apply to all written S3 objects,
              e.g., "private", "public-read". See
              https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              for the full list of canned ACLs.

       -o retries (default="5")
              number of times to retry a failed S3 transaction.

       -o tmpdir (default="/tmp")
              local folder for temporary files.

       -o use_cache (default="" which means disabled)
              local folder to use for the local file cache.

       -o check_cache_dir_exist (default is disabled)
              If use_cache is set, check whether the cache directory
              exists. If this option is not specified, the cache
              directory will be created at runtime when it does not
              exist.

       -o del_cache - delete local file cache
              delete the local file cache when s3fs starts and exits.

       -o storage_class (default="standard")
              store objects with the specified storage class. Possible
              values: standard, standard_ia, onezone_ia,
              reduced_redundancy, intelligent_tiering, glacier, and
              deep_archive.

       -o use_rrs (default is disabled)
              use Amazon's Reduced Redundancy Storage. This option
              cannot be specified together with use_sse. (use_rrs=1 is
              accepted for compatibility with older versions.) This
              option has been superseded by the storage_class option.

       -o use_sse (default is disabled)
              Specify one of three types of Amazon Server-Side
              Encryption: SSE-S3, SSE-C or SSE-KMS. SSE-S3 uses Amazon
              S3-managed encryption keys, SSE-C uses customer-provided
              encryption keys, and SSE-KMS uses a master key that you
              manage in AWS KMS. Specifying "use_sse" or "use_sse=1"
              enables SSE-S3 ("use_sse=1" is the old-style parameter).
              For SSE-C, specify "use_sse=custom",
              "use_sse=custom:<custom key file path>" or
              "use_sse=<custom key file path>" (a bare <custom key file
              path> is the old-style parameter). "c" can be used as an
              abbreviation for "custom". The custom key file must have
              600 permissions. The file may contain several lines; each
              line is one SSE-C key. The first line is used as the
              customer-provided encryption key for uploading and for
              changing headers, etc. Any keys after the first line are
              used for downloading objects that were encrypted with a
              key other than the first one, so the file can hold your
              entire SSE-C key history. If you specify "custom" ("c")
              without a file path, you must supply the key via the
              load_sse_c option or the AWSSSECKEYS environment variable
              (AWSSSECKEYS holds one or more SSE-C keys separated by
              ":"). This option selects the SSE type, so if you do not
              want to encrypt objects at upload time but need to decrypt
              encrypted objects at download time, use the load_sse_c
              option instead of this option. For SSE-KMS, specify
              "use_sse=kmsid" or "use_sse=kmsid:<kms id>". "k" can be
              used as an abbreviation for "kmsid". If you have a
              <kms id> in AWS KMS, set it after "kmsid:" (or "k:"). If
              you specify only "kmsid" ("k"), you must set the
              AWSSSEKMSID environment variable to your <kms id>. Note
              that you cannot use a KMS id from a region different from
              your EC2 region.
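
              As an illustrative sketch (the file path, key length and
              generation method here are assumptions, not part of s3fs
              itself), an SSE-C key file holding a current key plus one
              retired key might be prepared like this:

```shell
# Hypothetical SSE-C key file: one key per line.  The first line is used
# for uploads; later lines are only tried when decrypting older objects.
KEYFILE=./sse-c-keys.txt
# Generate a 32-character printable key (AES-256 keys are 32 bytes).
genkey() { head -c 48 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c 32; }
genkey  > "$KEYFILE"; printf '\n' >> "$KEYFILE"   # current key (line 1)
genkey >> "$KEYFILE"; printf '\n' >> "$KEYFILE"   # retired key, kept as history
chmod 600 "$KEYFILE"                              # s3fs requires 600 permissions
```

              The mount would then use "-o use_sse=custom:./sse-c-keys.txt".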

       -o load_sse_c - specify SSE-C keys
              Specify the path of a customer-provided encryption key
              file for decrypting at download time. If you use a
              customer-provided encryption key at upload time, specify
              it with "use_sse=custom". The file may contain several
              lines; each line is one custom key, so the file can hold
              your entire SSE-C key history. The AWSSSECKEYS environment
              variable holds the same contents as this file.

       -o passwd_file (default="")
              specify the path to the password file, which takes
              precedence over the password in $HOME/.passwd-s3fs and
              /etc/passwd-s3fs

       -o ahbe_conf (default="" which means disabled)
              This option specifies the path of a configuration file
              that defines additional HTTP headers by file (object)
              extension.
              The configuration file format is below:
              -----------
              line        = [file suffix or regex] HTTP-header [HTTP-values]
              file suffix = file (object) suffix; if this field is
                            empty, it means "reg:(.*)" (= all objects).
              regex       = regular expression to match the file
                            (object) path. This type starts with the
                            "reg:" prefix.
              HTTP-header = additional HTTP header name
              HTTP-values = additional HTTP header value
              -----------
              Sample:
              -----------
              .gz                     Content-Encoding  gzip
              .Z                      Content-Encoding  compress
              reg:^/MYDIR/(.*)[.]t2$  Content-Encoding  text2
              -----------
              A sample configuration file is provided in the "test"
              directory. If you use this option to set the
              "Content-Encoding" HTTP header, please take care to follow
              RFC 2616.

       -o profile (default="default")
              Choose a profile from ${HOME}/.aws/credentials to
              authenticate against S3. Note that this format matches the
              AWS CLI format and differs from the s3fs passwd format.

       -o public_bucket (default="" which means disabled)
              anonymously mount a public bucket when set to 1, ignoring
              the $HOME/.passwd-s3fs and /etc/passwd-s3fs files. S3 does
              not allow the copy object API for anonymous users, so s3fs
              sets the nocopyapi option automatically when
              public_bucket=1 is specified.

       -o connect_timeout (default="300" seconds)
              time to wait for connection before giving up.

       -o readwrite_timeout (default="120" seconds)
              time to wait between read/write activity before giving up.

       -o list_object_max_keys (default="1000")
              specify the maximum number of keys returned by the S3 list
              objects API. The default is 1000. You can set this value
              to 1000 or more.

       -o max_stat_cache_size (default="100,000" entries (about 40 MB))
              maximum number of entries in the stat cache and symbolic
              link cache.

       -o stat_cache_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache. This expiry time is
              measured from the time an entry was cached.

       -o stat_cache_interval_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache. This expiry time is
              measured from the last access time of the cached entry.
              This option is exclusive with stat_cache_expire and is
              kept for compatibility with older versions.

       -o enable_noobj_cache (default is disabled)
              enable cache entries for objects that do not exist.
              Because s3fs recognizes a directory even when the
              directory object itself does not exist but files or sub
              directories exist under its path, it always has to check
              whether a file (or sub directory) exists under an object
              path whenever it handles a command. This increases
              ListBucket requests and hurts performance. With this
              option, s3fs remembers in the stat cache that an object
              (file or directory) does not exist.

       -o no_check_certificate (by default this option is disabled)
              the server certificate won't be checked against the
              available certificate authorities.

       -o ssl_verify_hostname (default="2")
              When 0, do not verify the SSL certificate against the
              hostname.

       -o nodnscache - disable DNS cache.
              s3fs always uses a DNS cache; this option disables it.

       -o nosscache - disable SSL session cache.
              s3fs always uses an SSL session cache; this option
              disables it.

       -o multireq_max (default="20")
              maximum number of parallel requests for listing objects.

       -o parallel_count (default="5")
              number of parallel requests for uploading big objects.
              s3fs uploads large objects (over 20 MB) with multipart
              post requests and sends these requests in parallel. This
              option limits the number of parallel requests s3fs issues
              at once. Set this value according to your CPU and network
              bandwidth.

       -o multipart_size (default="10")
              part size, in MB, for each multipart request. The minimum
              value is 5 MB and the maximum value is 5 GB.

       -o multipart_copy_size (default="512")
              part size, in MB, for each multipart copy request, used
              for renames and mixupload. The minimum value is 5 MB and
              the maximum value is 5 GB. Must be at least 512 MB to copy
              the maximum 5 TB object size, but lower values may improve
              performance.

       -o max_dirty_data (default="5120")
              Flush dirty data to S3 after a certain number of MB
              written. The minimum value is 50 MB. A value of -1 means
              disabled. Cannot be used with nomixupload.

       -o ensure_diskfree (default 0)
              sets the free disk space, in MB, that s3fs tries to keep
              available. This is the threshold of free space on the disk
              holding the files s3fs creates for downloading, uploading
              and caching. If free disk space drops below this value,
              s3fs uses as little disk space as possible, at the cost of
              performance.

       -o multipart_threshold (default="25")
              threshold, in MB, above which multipart upload is used
              instead of single-part upload. Must be at least 5 MB.

       -o singlepart_copy_limit (default="512")
              maximum size, in MB, of a single-part copy before trying
              multipart copy.

       -o host (default="https://s3.amazonaws.com")
              Set a non-Amazon host, e.g., https://example.com.

       -o servicepath (default="/")
              Set a service path when the non-Amazon host requires a
              prefix.

       -o url (default="https://s3.amazonaws.com")
              sets the URL to use to access Amazon S3. If you want to
              use HTTP, set "url=http://s3.amazonaws.com". If you do not
              use HTTPS, you must specify the URL with this option.

       -o endpoint (default="us-east-1")
              sets the region to use for signature version 4. If this
              option is not specified, s3fs uses the "us-east-1" region
              by default. If s3fs cannot connect to the region specified
              by this option, it fails to run. However, if this option
              is not specified and connecting with the default region
              fails, s3fs automatically retries other regions: it can
              learn the correct region name from the error returned by
              the S3 server.

       -o sigv2 (default is signature version 4, falling back to version 2)
              sign AWS requests using only signature version 2.

       -o sigv4 (default is signature version 4, falling back to version 2)
              sign AWS requests using only signature version 4.

       -o mp_umask (default is "0000")
              sets the umask for the mount point directory. If the
              allow_other option is not set, s3fs allows access to the
              mount point only to the owner; otherwise s3fs allows
              access to all users by default. If you set allow_other
              together with this option, you can control the permissions
              of the mount point with this option, like umask.

       -o umask (default is "0000")
              sets the umask for files under the mountpoint. This can
              allow users other than the mounting user to read and write
              files that they did not create.

       -o nomultipart - disable multipart uploads

       -o enable_content_md5 (default is disabled)
              Allow the S3 server to check the data integrity of uploads
              via the Content-MD5 header. This can add CPU overhead to
              transfers.

       -o ecs (default is disabled)
              This option instructs s3fs to query the ECS container
              credential metadata address instead of the instance
              metadata address.

       -o iam_role (default is no IAM role)
              This option takes an IAM role name or "auto". If you
              specify "auto", s3fs automatically uses the IAM role name
              that is set on the instance. Specifying this option
              without any argument is the same as specifying "auto".

       -o imdsv1only (default is to use IMDSv2 with fallback to v1)
              The AWS instance metadata service, used with IAM role
              authentication, supports the use of an API token. If you
              are using an IAM role in an environment that does not
              support IMDSv2, setting this flag skips retrieval and
              usage of the API token when retrieving IAM credentials.

       -o ibm_iam_auth (default is not using IBM IAM authentication)
              This option instructs s3fs to use IBM IAM authentication.
              In this mode, the AWSAccessKey and AWSSecretKey will be
              used as IBM's Service-Instance-ID and APIKey,
              respectively.

       -o ibm_iam_endpoint (default is https://iam.cloud.ibm.com)
              Sets the URL to use for IBM IAM authentication.

       -o use_xattr (default is not handling extended attributes)
              Enable handling of extended attributes (xattrs). If you
              set this option, you can use extended attributes; for
              example, encfs and ecryptfs need extended attribute
              support. Note: if s3fs handles extended attributes, s3fs
              cannot work with the copy command with preserve=mode.

       -o noxmlns - disable registering an XML name space.
              disable registering an XML name space for responses such
              as ListBucketResult and ListVersionsResult. The default
              name space is looked up from
              "http://s3.amazonaws.com/doc/2006-03-01". This option
              should no longer be specified, because s3fs looks up the
              xmlns automatically since v1.66.

       -o nomixupload - disable copy in multipart uploads.
              Disable the use of PUT (copy API) when multipart-uploading
              large objects. By default, during a multipart upload, the
              ranges of unchanged data use PUT (copy API) whenever
              possible. When nocopyapi or norenameapi is specified, use
              of PUT (copy API) is disabled even if this option is not
              specified.

       -o nocopyapi - for object storage with incomplete compatibility.
              For distributed object storage that is compatible with the
              S3 API but lacks PUT (copy API). If you set this option,
              s3fs does not use PUT with "x-amz-copy-source" (copy API).
              Because this option increases traffic by 2-3 times, we do
              not recommend it.

       -o norenameapi - for object storage with incomplete compatibility.
              For distributed object storage that is compatible with the
              S3 API but lacks PUT (copy API). This option is a subset
              of the nocopyapi option: nocopyapi avoids the copy API for
              all commands (e.g. chmod, chown, touch, mv, etc.), whereas
              this option avoids it only for the rename command (e.g.
              mv). If this option is specified together with nocopyapi,
              s3fs ignores it.

       -o use_path_request_style (use legacy API calling style)
              Enable compatibility with S3-like APIs that do not support
              the virtual-host request style, by using the older path
              request style.

       -o listobjectsv2 (use ListObjectsV2)
              Issue ListObjectsV2 instead of ListObjects, useful on
              object stores without ListObjects support.

       -o noua (suppress User-Agent header)
              Usually s3fs sends a User-Agent header in the format
              "s3fs/<version> (commit hash <hash>; <ssl library name>)".
              If this option is specified, s3fs suppresses the
              User-Agent header.

       -o cipher_suites
              Customize the list of TLS cipher suites. Expects a
              colon-separated list of cipher suite names. A list of
              available cipher suites, depending on your TLS engine, can
              be found in the curl library documentation:
              https://curl.haxx.se/docs/ssl-ciphers.html

       -o instance_name
              The instance name of the current s3fs mountpoint. This
              name will be added to logging messages and User-Agent
              headers sent by s3fs.

       -o complement_stat (complement lack of file/directory mode)
              s3fs complements missing file/directory mode information
              when a file or directory object does not have an
              x-amz-meta-mode header. By default, s3fs does not
              complement stat information for such an object, so the
              object cannot be listed or modified.

       -o notsup_compat_dir (do not support compatibility directory types)
              By default, s3fs supports directory-type objects as
              broadly as possible and recognizes them as directories.
              Objects that can be recognized as directories are "dir/",
              "dir", "dir_$folder$", and a path for which no directory
              object exists but some file object contains that directory
              in its path. Supporting all of these directory types
              requires redundant communication. Directories created by
              s3fs are stored as "dir/" objects, so restricting s3fs to
              recognize only "dir/" as a directory reduces communication
              traffic; this option imposes that restriction. However, if
              the bucket contains directory objects other than "dir/",
              specifying this option is not recommended: s3fs may fail
              to recognize objects correctly if objects not created by
              s3fs exist in the bucket. Use this option only when all
              directories in the bucket are "dir/" objects.

       -o use_wtf8 - support arbitrary file system encoding.
              S3 requires all object names to be valid UTF-8. But some
              clients, notably Windows NFS clients, use their own
              encoding. This option re-encodes invalid UTF-8 object
              names into valid UTF-8 by mapping the offending codes into
              a 'private' codepage of the Unicode set. Useful on clients
              that do not use UTF-8 as their file system encoding.

       -o use_session_token - indicate that a session token should be
              provided. If credentials are provided by environment
              variables, this switch forces a presence check of the
              AWS_SESSION_TOKEN variable. Otherwise an error is
              returned.

       -o requester_pays (default is disabled)
              This option instructs s3fs to enable requests involving
              Requester Pays buckets (it includes the
              'x-amz-request-payer=requester' entry in the request
              header).

       -o mime (default is "/etc/mime.types")
              Specify the path of the mime.types file. If this option is
              not specified, the existence of "/etc/mime.types" is
              checked, and that file is loaded as mime information. If
              this file does not exist on macOS, then
              "/etc/apache2/mime.types" is checked as well.

       -o logfile - specify the log output file.
              s3fs outputs its log to syslog. Alternatively, if s3fs is
              started with the "-f" option, the log is output to
              stdout/stderr. You can use this option to specify a log
              file for s3fs output. If you specify a log file with this
              option, s3fs reopens the log file when it receives a
              SIGHUP signal; you can use the SIGHUP signal for log
              rotation.
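
              Because s3fs reopens the log file on SIGHUP, the logfile
              option combines naturally with logrotate. A sketch of a
              logrotate rule, assuming s3fs was started with
              "-o logfile=/var/log/s3fs.log" (the path, schedule and
              process matching here are illustrative, not s3fs
              defaults):

```
/var/log/s3fs.log {
    weekly
    rotate 4
    compress
    missingok
    postrotate
        # Ask the running s3fs process to reopen its log file.
        pkill -HUP -x s3fs || true
    endscript
}
```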

       -o dbglevel (default="crit")
              Set the debug message level. Set the value to crit
              (critical), err (error), warn (warning), or info
              (information); the default level is critical. If s3fs is
              run with the "-d" option, the debug level is set to
              information. When s3fs catches the SIGUSR2 signal, the
              debug level is bumped up.

       -o curldbg - put curl debug messages
              Output debug messages from libcurl when this option is
              specified. Specify "normal" or "body" as the parameter; if
              the parameter is omitted, it is the same as "normal". If
              "body" is specified, some API communication body data is
              output in addition to the debug messages output by
              "normal".

       -o no_time_stamp_msg - no time stamp in debug messages
              A time stamp is output with debug messages by default. If
              this option is specified, the time stamp is not output in
              debug messages. Setting the environment variable
              "S3FS_MSGTIMESTAMP" to "no" has the same effect.

       -o set_check_cache_sigusr1 (default is stdout)
              If the cache is enabled, you can check the integrity of
              the cache file and of the cache file's stats info file.
              When this option is specified, sending the SIGUSR1 signal
              to the s3fs process checks the cache status at that time.
              This option can take a file path as a parameter to output
              the check result to that file; the parameter can be
              omitted, in which case the result is written to stdout or
              syslog.

   utility mode options
       -u or --incomplete-mpu-list
              Lists incomplete multipart upload objects in the specified
              bucket.

       --incomplete-mpu-abort all or date format (default="24H")
              Deletes incomplete multipart upload objects in the
              specified bucket. If "all" is specified, all incomplete
              multipart upload objects are deleted. If no argument is
              given, objects older than 24 hours (24H) are deleted (this
              is the default). You can specify an optional date format
              as years, months, days, hours, minutes and seconds,
              expressed as "Y", "M", "D", "h", "m" and "s" respectively;
              for example, "1Y6M10D12h30m30s".
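
       For example (the bucket name is a placeholder; these commands
       assume s3fs is installed and credentials are configured):

```
# List incomplete multipart uploads in a hypothetical bucket "mybucket".
s3fs -u mybucket

# Abort incomplete uploads older than 1 day and 12 hours; "=all" would
# remove every incomplete upload regardless of age.
s3fs --incomplete-mpu-abort=1D12h mybucket
```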

FUSE/MOUNT OPTIONS
       Most of the generic mount options described in 'man mount' are
       supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime,
       noatime, sync, async, dirsync). Filesystems are mounted with
       '-onodev,nosuid' by default, which can only be overridden by a
       privileged user.

       There are many FUSE-specific mount options that can be specified,
       e.g. allow_other. See the FUSE README for the full set.
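
       For persistent mounts, the same options can be given in
       /etc/fstab. A sketch of such an entry (bucket name, mountpoint
       and passwd file path are placeholders):

```
mybucket /mnt/s3 fuse.s3fs _netdev,allow_other,passwd_file=/etc/passwd-s3fs 0 0
```

       The _netdev option defers mounting until the network is up;
       whether your distribution needs it depends on its boot
       configuration.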

LOCAL STORAGE CONSUMPTION
       s3fs requires local caching for operation. You can enable a local
       cache with "-o use_cache"; otherwise s3fs uses temporary files to
       cache pending requests to S3.

       Apart from the requirements discussed below, it is recommended to
       keep enough cache or temporary storage to allow one copy each of
       all files open for reading and writing at any one time.

   Local cache with "-o use_cache"
       s3fs automatically maintains a local cache of files. The cache
       folder is specified by the parameter of "-o use_cache". It is
       only a local cache that can be deleted at any time; s3fs rebuilds
       it if necessary.

       Whenever s3fs needs to read or write a file on S3, it first
       creates the file in the cache directory and operates on it.

       The amount of local cache storage used can be indirectly
       controlled with "-o ensure_diskfree".

   Without local cache
       Since s3fs always requires some storage space for operation, it
       creates temporary files to store incoming write requests until
       the required S3 request size is reached and the segment has been
       uploaded. After that, this data is truncated from the temporary
       file to free up storage space.

       Per file, you need at least twice the part size (default 5 MB, or
       "-o multipart_size") for writing multipart requests, or space for
       the whole file if single requests are enabled
       ("-o nomultipart").

NOTES
       The maximum size of objects that s3fs can handle depends on
       Amazon S3: up to 5 GB when using the single PUT API, and up to
       5 TB when the Multipart Upload API is used.

       s3fs leverages /etc/mime.types to "guess" the "correct"
       content-type based on the file name extension. This means that
       you can copy a website to S3 and serve it up directly from S3
       with correct content-types!
560
562 fuse(8), mount(8), fusermount(1), fstab(5)
563
565 Due to S3's "eventual consistency" limitations, file creation can and
566 will occasionally fail. Even after a successful create, subsequent
567 reads can fail for an indeterminate time, even after one or more suc‐
568 cessful reads. Create and read enough files and you will eventually en‐
569 counter this failure. This is not a flaw in s3fs and it is not some‐
570 thing a FUSE wrapper like s3fs can work around. The retries option does
571 not address this issue. Your application must either tolerate or com‐
572 pensate for these failures, for example by retrying creates or reads.
573
AUTHOR
       s3fs has been written by Randy Rizun <rrizun@gmail.com>.



S3FS                             February 2011                         S3FS(1)