S3FS(1)                          User Commands                         S3FS(1)



NAME
       S3FS - FUSE-based file system backed by Amazon S3

SYNOPSIS
   mounting
       s3fs bucket[:/path] mountpoint [options]

       s3fs mountpoint [options (must specify bucket= option)]

   unmounting
       umount mountpoint
              For root.

       fusermount -u mountpoint
              For unprivileged users.

   utility mode (remove interrupted multipart uploading objects)
       s3fs --incomplete-mpu-list (-u) bucket

       s3fs --incomplete-mpu-abort[=all | =<expire date format>] bucket

DESCRIPTION
       s3fs is a FUSE filesystem that allows you to mount an Amazon S3
       bucket as a local filesystem. It stores files natively and
       transparently in S3 (i.e., you can use other programs to access the
       same files).

AUTHENTICATION
       s3fs supports the standard AWS credentials file
       (https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html)
       stored in `${HOME}/.aws/credentials`. Alternatively, s3fs supports a
       custom passwd file. The s3fs password file has this format (use this
       format if you have only one set of credentials):
           accessKeyId:secretAccessKey

       If you have more than one set of credentials, this syntax is also
       recognized:
           bucketName:accessKeyId:secretAccessKey

       Password files can be stored in two locations:
           /etc/passwd-s3fs     [0640]
           $HOME/.passwd-s3fs   [0600]
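
       For example, you might create a personal password file and mount a
       bucket like this (the bucket name, mountpoint and keys below are
       placeholders):

           # store one set of credentials (replace with your own keys)
           echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
           chmod 600 ${HOME}/.passwd-s3fs
           # mount the bucket
           s3fs mybucket /path/to/mountpoint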

OPTIONS
   general options
       -h   --help
              print help

       --version
              print version

       -f     FUSE foreground option - do not run as daemon.

       -s     FUSE single-threaded option (disables multi-threaded
              operation)

   mount options
       All s3fs options must be given in the form where "opt" is:
              <option_name>=<option_value>

       -o bucket
              if the bucket name (and path) is not specified on the command
              line, it must be given with this option, as shown in the
              example below.
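
       For example (bucket and mountpoint names are placeholders):

           s3fs /path/to/mountpoint -o bucket=mybucket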

       -o default_acl (default="private")
              the default canned ACL to apply to all written S3 objects,
              e.g. "private" or "public-read". An empty string means do not
              send the header. See
              https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              for the full list of canned ACLs.

       -o retries (default="5")
              number of times to retry a failed S3 transaction.

       -o use_cache (default="" which means disabled)
              local folder to use for the local file cache.

       -o check_cache_dir_exist (default is disable)
              If use_cache is set, check that the cache directory exists.
              If this option is not specified, the cache directory is
              created at runtime when it does not exist.

       -o del_cache - delete local file cache
              delete the local file cache when s3fs starts and exits.

       -o storage_class (default is standard)
              store objects with the specified storage class. This option
              replaces the old use_rrs option. Possible values: standard,
              standard_ia, onezone_ia and reduced_redundancy.
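
       For example, to store new objects in the standard_ia class (bucket
       and mountpoint names are placeholders):

           s3fs mybucket /path/to/mountpoint -o storage_class=standard_ia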

       -o use_rrs (default is disable)
              use Amazon's Reduced Redundancy Storage. This option cannot
              be specified together with use_sse. (use_rrs=1 can be
              specified for old versions.) This option has been replaced by
              the new storage_class option.

       -o use_sse (default is disable)
              Specify one of three types of Amazon Server-Side Encryption:
              SSE-S3, SSE-C or SSE-KMS. SSE-S3 uses Amazon S3-managed
              encryption keys, SSE-C uses customer-provided encryption
              keys, and SSE-KMS uses a master key which you manage in AWS
              KMS. Specifying "use_sse" or "use_sse=1" enables the SSE-S3
              type ("use_sse=1" is the old-style parameter). For SSE-C,
              specify "use_sse=custom", "use_sse=custom:<custom key file
              path>" or "use_sse=<custom key file path>" (the last form,
              with only the file path, is the old-style parameter). "c"
              can be used as an abbreviation for "custom". The custom key
              file must have 600 permissions. The file may contain several
              lines, each line holding one SSE-C key. The first line in
              the file is used as the customer-provided encryption key for
              uploading and changing headers; any keys after the first
              line are used for downloading objects that were encrypted
              with a key other than the first. In this way you can keep
              all of your SSE-C keys in the file as an SSE-C key history.
              If you specify "custom" ("c") without a file path, you must
              provide the custom key via the load_sse_c option or the
              AWSSSECKEYS environment variable (which holds SSE-C keys
              separated by ":"). This option determines the SSE type; if
              you do not want to encrypt objects at upload time but need
              to decrypt encrypted objects at download time, use the
              load_sse_c option instead of this option. For SSE-KMS,
              specify "use_sse=kmsid" or "use_sse=kmsid:<kms id>". "k" can
              be used as an abbreviation for "kmsid". If you have a <kms
              id> in AWS KMS, you can set it after "kmsid:" (or "k:"). If
              you specify only "kmsid" ("k"), you must set the AWSSSEKMSID
              environment variable to your <kms id>. Be careful: you
              cannot use a KMS id from a region other than your EC2
              region.
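
       For example, each SSE type might be enabled like this (the bucket,
       mountpoint, key file path and KMS id below are placeholders):

           # SSE-S3 with Amazon-managed keys
           s3fs mybucket /path/to/mountpoint -o use_sse
           # SSE-C with a customer-provided key file
           s3fs mybucket /path/to/mountpoint -o use_sse=custom:/path/to/sse.keys
           # SSE-KMS with a specific KMS key id
           s3fs mybucket /path/to/mountpoint -o use_sse=kmsid:my-kms-key-id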

       -o load_sse_c - specify SSE-C keys
              Specify the path of a file holding customer-provided
              encryption keys for decrypting at download time. If you use
              customer-provided encryption keys at upload time, specify
              them with "use_sse=custom" instead. The file can contain
              multiple lines, each line being one custom key, so you can
              keep all of your SSE-C keys in the file as an SSE-C key
              history. The AWSSSECKEYS environment variable has the same
              contents as this file.

       -o passwd_file (default="")
              specify the path to the password file, which takes precedence
              over the password in $HOME/.passwd-s3fs and /etc/passwd-s3fs

       -o ahbe_conf (default="" which means disabled)
              This option specifies the path to a configuration file that
              defines additional HTTP headers by file (object) extension.
              The configuration file format is below:
              -----------
              line        = [file suffix or regex] HTTP-header [HTTP-values]
              file suffix = file (object) suffix; if this field is empty,
                            it means "reg:(.*)" (= all objects).
              regex       = regular expression to match the file (object)
                            path. This type starts with the "reg:" prefix.
              HTTP-header = additional HTTP header name
              HTTP-values = additional HTTP header value
              -----------
              Sample:
              -----------
              .gz                     Content-Encoding  gzip
              .Z                      Content-Encoding  compress
              reg:^/MYDIR/(.*)[.]t2$  Content-Encoding  text2
              -----------
              A sample configuration file is provided in the "test"
              directory. If you use this option to set the
              "Content-Encoding" HTTP header, please take care to comply
              with RFC 2616.

       -o profile (default="default")
              Choose a profile from ${HOME}/.aws/credentials to
              authenticate against S3. Note that this file format matches
              the AWS CLI format and differs from the s3fs passwd format.
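
       For example (profile, bucket and mountpoint names are placeholders):

           s3fs mybucket /path/to/mountpoint -o profile=myprofile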

       -o public_bucket (default="" which means disabled)
              anonymously mount a public bucket when set to 1; ignores the
              $HOME/.passwd-s3fs and /etc/passwd-s3fs files. S3 does not
              allow the copy object API for anonymous users, so s3fs sets
              the nocopyapi option automatically when public_bucket=1 is
              specified.
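
       For example, to mount a public bucket without credentials (names
       are placeholders):

           s3fs mybucket /path/to/mountpoint -o public_bucket=1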

       -o connect_timeout (default="300" seconds)
              time to wait for connection before giving up.

       -o readwrite_timeout (default="60" seconds)
              time to wait between read/write activity before giving up.

       -o list_object_max_keys (default="1000")
              specify the maximum number of keys returned by the S3 list
              objects API. The default is 1000. You can set this value to
              1000 or more.

       -o max_stat_cache_size (default="100,000" entries (about 40MB))
              maximum number of entries in the stat cache

       -o stat_cache_expire (default is no expire)
              specify the expiry time, in seconds, for entries in the stat
              cache. This expiry time is measured from when the entry was
              cached.

       -o stat_cache_interval_expire (default is no expire)
              specify the expiry time, in seconds, for entries in the stat
              cache. This expiry time is based on the last access time of
              the stat cache entry. This option is exclusive with
              stat_cache_expire, and is kept for compatibility with older
              versions.

       -o enable_noobj_cache (default is disable)
              enable cache entries for objects which do not exist. Because
              s3fs recognizes a path as a directory even when no directory
              object exists but files or subdirectories exist under it,
              s3fs always has to check whether a file (or subdirectory)
              exists under an object (path) when executing a command. This
              increases ListBucket requests and hurts performance. You can
              specify this option for performance: s3fs then remembers in
              the stat cache that the object (file or directory) does not
              exist.

       -o no_check_certificate (by default this option is disabled)
              do not check the SSL certificate. The server certificate will
              not be checked against the available certificate authorities.

       -o nodnscache - disable dns cache.
              s3fs always uses a DNS cache; this option disables it.

       -o nosscache - disable ssl session cache.
              s3fs always uses an SSL session cache; this option disables
              it.

       -o multireq_max (default="20")
              maximum number of parallel requests for listing objects.

       -o parallel_count (default="5")
              number of parallel requests for uploading big objects. s3fs
              uploads large objects (by default, over 20MB) via multipart
              upload requests, sending requests in parallel. This option
              limits the number of parallel requests s3fs issues at once.
              Set this value according to your CPU and network bandwidth.

       -o multipart_size (default="10" (10MB))
              size, in MB, of each part in a multipart upload request. The
              default size is 10MB (10485760 bytes) and the minimum value
              is 5MB (5242880 bytes). Specify a number of MB that is 5
              (MB) or more.
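
       For example, to upload in 64MB parts with 10 parallel requests
       (bucket and mountpoint names are placeholders):

           s3fs mybucket /path/to/mountpoint -o multipart_size=64 -o parallel_count=10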

       -o ensure_diskfree (default 0)
              sets the amount of disk space, in MB, to keep free. This
              option is the threshold of free space on the disk that s3fs
              uses for cache files; s3fs creates local files for
              downloading, uploading and caching. If the free disk space
              falls below this value, s3fs avoids using disk space as much
              as possible, in exchange for performance.

       -o url (default="https://s3.amazonaws.com")
              sets the URL to use to access Amazon S3. If you want to use
              HTTP, set "url=http://s3.amazonaws.com". If you do not want
              HTTPS, you must specify the plain-HTTP URL with this option.

       -o endpoint (default="us-east-1")
              sets the endpoint (region) to use. If this option is not
              specified, s3fs uses the "us-east-1" region as the default.
              If s3fs cannot connect to the region specified by this
              option, it cannot run. However, if you do not specify this
              option and cannot connect with the default region, s3fs
              retries automatically against another region; it learns the
              correct region name from the error returned by the S3
              server.
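
       For example, to mount against a specific region (the region, bucket
       and mountpoint names are placeholders):

           s3fs mybucket /path/to/mountpoint -o url=https://s3.us-west-2.amazonaws.com -o endpoint=us-west-2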

       -o sigv2 (default is signature version 4)
              sign AWS requests using Signature Version 2.

       -o mp_umask (default is "0000")
              sets the umask for the mount point directory. If the
              allow_other option is not set, s3fs allows access to the
              mount point only to the owner; in the opposite case s3fs
              allows access to all users by default. If you set allow_other
              together with this option, you can control the permissions of
              the mount point with this option, like a umask.
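
       For example, to share the mount point with other users while
       removing write permission for group and others (names are
       placeholders):

           s3fs mybucket /path/to/mountpoint -o allow_other -o mp_umask=0022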

       -o nomultipart - disable multipart uploads

       -o enable_content_md5 (default is disable)
              Allow the S3 server to check the data integrity of uploads
              via the Content-MD5 header. This can add CPU overhead to
              transfers.

       -o ecs (default is disable)
              This option instructs s3fs to query the ECS container
              credential metadata address instead of the instance metadata
              address.

       -o iam_role (default is no IAM role)
              This option requires an IAM role name or "auto". If you
              specify "auto", s3fs will automatically use the IAM role name
              attached to the instance. Specifying this option without an
              argument is the same as specifying "auto".
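
       For example, on an EC2 instance with an attached role (bucket and
       mountpoint names are placeholders):

           s3fs mybucket /path/to/mountpoint -o iam_role=auto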

       -o ibm_iam_auth (default is not using IBM IAM authentication)
              This option instructs s3fs to use IBM IAM authentication. In
              this mode, the AWSAccessKey and AWSSecretKey will be used as
              IBM's Service-Instance-ID and APIKey, respectively.

       -o ibm_iam_endpoint (default is https://iam.bluemix.net)
              Set the URL to use for IBM IAM authentication.

       -o use_xattr (default is not handling the extended attribute)
              Enable handling of extended attributes (xattrs). If you set
              this option, you can use extended attributes. For example,
              encfs and ecryptfs need extended attribute support. Notice:
              if s3fs handles extended attributes, s3fs cannot work with
              copy commands that use preserve=mode.

       -o noxmlns - disable registering xml name space.
              disable registering an XML namespace for responses such as
              ListBucketResult and ListVersionsResult. The default
              namespace is looked up from
              "http://s3.amazonaws.com/doc/2006-03-01". This option should
              not be needed now, because s3fs looks up the xmlns
              automatically since v1.66.

       -o nocopyapi - for other incomplete-compatibility object storage.
              For distributed object storage which is compatible with the
              S3 API but lacks PUT with copy (the copy API). If you set
              this option, s3fs does not use PUT with "x-amz-copy-source"
              (the copy API). Because this option increases traffic 2-3
              times, we do not recommend it.

       -o norenameapi - for other incomplete-compatibility object storage.
              For distributed object storage which is compatible with the
              S3 API but lacks PUT with copy (the copy API). This option is
              a subset of the nocopyapi option: nocopyapi avoids the copy
              API for all commands (e.g. chmod, chown, touch, mv, etc.),
              while this option avoids it only for rename (e.g. mv). If
              this option is specified together with nocopyapi, s3fs
              ignores it.

       -o use_path_request_style (use legacy API calling style)
              Enable compatibility with S3-like APIs which do not support
              the virtual-host request style, by using the older path
              request style.
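
       For example, mounting an S3-compatible service that only supports
       path-style requests (the URL, bucket and mountpoint names are
       placeholders):

           s3fs mybucket /path/to/mountpoint -o url=https://storage.example.com -o use_path_request_style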

       -o noua (suppress User-Agent header)
              Usually s3fs sends a User-Agent header in the format
              "s3fs/<version> (commit hash <hash>; <using ssl library
              name>)". If this option is specified, s3fs suppresses the
              User-Agent header.

       -o cipher_suites
              Customize the TLS cipher suite list. Expects a colon-
              separated list of cipher suite names. A list of available
              cipher suites, depending on your TLS engine, can be found in
              the curl library documentation:
              https://curl.haxx.se/docs/ssl-ciphers.html
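
       For example, restricting the mount to an OpenSSL-style cipher list
       (the exact names depend on your TLS library; all other names are
       placeholders):

           s3fs mybucket /path/to/mountpoint -o cipher_suites=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256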

       -o instance_name
              The instance name of the current s3fs mountpoint. This name
              will be added to logging messages and User-Agent headers sent
              by s3fs.

       -o complement_stat (complement lack of file/directory mode)
              s3fs complements missing file/directory mode information when
              a file or directory object does not have an x-amz-meta-mode
              header. By default, s3fs does not complement stat information
              for such objects, so they may not be allowed to be listed or
              modified.

       -o notsup_compat_dir (not support compatibility directory types)
              By default, s3fs supports directory-type objects as far as
              possible and recognizes them as directories. Objects that can
              be recognized as directory objects are "dir/", "dir",
              "dir_$folder$", and paths that have no directory object but
              do have a file object under them. s3fs needs redundant
              communication to support all of these directory types. The
              directory object created by s3fs itself is "dir/". By
              restricting s3fs to recognize only "dir/" as a directory,
              communication traffic can be reduced; this option applies
              that restriction. However, if the bucket contains directory
              objects other than "dir/", specifying this option is not
              recommended, because s3fs may not recognize those objects
              correctly. Use this option only when every directory object
              in the bucket is a "dir/" object.

       -o use_wtf8 - support arbitrary file system encoding.
              S3 requires all object names to be valid UTF-8. But some
              clients, notably Windows NFS clients, use their own encoding.
              This option re-encodes invalid UTF-8 object names into valid
              UTF-8 by mapping the offending codes into a 'private'
              codepage of the Unicode set. Useful on clients not using
              UTF-8 as their file system encoding.

       -o dbglevel (default="crit")
              Set the debug message level: crit (critical), err (error),
              warn (warning) or info (information). The default debug level
              is critical. If s3fs runs with the "-d" option, the debug
              level is set to info. When s3fs catches the signal SIGUSR2,
              the debug level is bumped up.

       -o curldbg - put curl debug messages
              Print the debug messages from libcurl when this option is
              specified.
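
       For example, to debug a mount in the foreground with verbose
       logging (bucket and mountpoint names are placeholders):

           s3fs mybucket /path/to/mountpoint -f -o dbglevel=info -o curldbg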

   utility mode options
       -u or --incomplete-mpu-list
              Lists incomplete multipart upload objects in the specified
              bucket.

       --incomplete-mpu-abort all or date format (default="24H")
              Deletes incomplete multipart upload objects in the specified
              bucket. If "all" is specified, all incomplete multipart
              upload objects are deleted. If you specify no argument,
              objects older than 24 hours (24H) are deleted (this is the
              default). You can specify an optional date format, expressed
              in years, months, days, hours, minutes and seconds as "Y",
              "M", "D", "h", "m" and "s" respectively. For example,
              "1Y6M10D12h30m30s".

FUSE/MOUNT OPTIONS
       Most of the generic mount options described in 'man mount' are
       supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime,
       noatime, sync, async, dirsync). Filesystems are mounted with
       '-onodev,nosuid' by default, which can only be overridden by a
       privileged user.

       There are many FUSE-specific mount options that can be specified,
       e.g. allow_other. See the FUSE README for the full set.

NOTES
       The maximum size of objects that s3fs can handle depends on Amazon
       S3: up to 5 GB when using the single PUT API, and up to 5 TB when
       the Multipart Upload API is used.

       If enabled via the "use_cache" option, s3fs automatically maintains
       a local cache of files in the folder specified by use_cache.
       Whenever s3fs needs to read or write a file on S3, it first
       downloads the entire file locally to the folder specified by
       use_cache and operates on it. When fuse_release() is called, s3fs
       will re-upload the file to S3 if it has been changed. s3fs uses md5
       checksums to minimize downloads from S3.

       The folder specified by use_cache is just a local cache. It can be
       deleted at any time; s3fs rebuilds it on demand.
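
       For example, a mount using a local cache directory might look like
       this (all names are placeholders):

           mkdir -p /tmp/s3fs-cache
           s3fs mybucket /path/to/mountpoint -o use_cache=/tmp/s3fs-cache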

       Local file caching works by calculating and comparing md5 checksums
       (the ETag HTTP header).

       s3fs leverages /etc/mime.types to "guess" the "correct" content-type
       based on the file name extension. This means that you can copy a
       website to S3 and serve it up directly from S3 with correct
       content-types!

BUGS
       Due to S3's "eventual consistency" limitations, file creation can
       and will occasionally fail. Even after a successful create,
       subsequent reads can fail for an indeterminate time, even after one
       or more successful reads. Create and read enough files and you will
       eventually encounter this failure. This is not a flaw in s3fs, and
       it is not something a FUSE wrapper like s3fs can work around. The
       retries option does not address this issue. Your application must
       either tolerate or compensate for these failures, for example by
       retrying creates or reads.

AUTHOR
       s3fs has been written by Randy Rizun <rrizun@gmail.com>.



S3FS                             February 2011                         S3FS(1)