S3FS(1)                          User Commands                         S3FS(1)



NAME

       S3FS - FUSE-based file system backed by Amazon S3


SYNOPSIS

   mounting
       s3fs bucket[:/path] mountpoint [options]

       s3fs mountpoint [options (must specify bucket= option)]

   unmounting
       umount mountpoint
              For root.

       fusermount -u mountpoint
              For an unprivileged user.

   utility mode (remove interrupted multipart uploading objects)
       s3fs --incomplete-mpu-list (-u) bucket

       s3fs --incomplete-mpu-abort[=all | =<expire date format>] bucket


DESCRIPTION

       s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket
       as a local filesystem.  It stores files natively and transparently in
       S3 (i.e., you can use other programs to access the same files).


AUTHENTICATION

       s3fs supports the standard AWS credentials file
       (https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html)
       stored in `${HOME}/.aws/credentials`.  Alternatively, s3fs supports a
       custom passwd file.  Only the AWS credentials file format can be used
       when an AWS session token is required.  The s3fs password file has
       this format (use this format if you have only one set of
       credentials):
           accessKeyId:secretAccessKey

       If you have more than one set of credentials, this syntax is also
       recognized:
           bucketName:accessKeyId:secretAccessKey

       Password files can be stored in two locations:
            /etc/passwd-s3fs     [0640]
            $HOME/.passwd-s3fs   [0600]

       s3fs also recognizes the AWSACCESSKEYID and AWSSECRETACCESSKEY
       environment variables.

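       As a minimal sketch, a single-credential password file can be created
       like this (the key values below are AWS's documented placeholder keys,
       not real credentials):

```shell
# Create an s3fs password file holding one credential pair.
# Replace the placeholder access key and secret with your real keys.
echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY' > "${HOME}/.passwd-s3fs"
# s3fs rejects password files that are readable by other users.
chmod 600 "${HOME}/.passwd-s3fs"
```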

OPTIONS

   general options
       -h   --help
              print help

       --version
              print version

       -f     FUSE foreground option - do not run as daemon.

       -s     FUSE single-threaded option (disables multi-threaded operation)

   mount options
       All s3fs options must be given in the form "-o opt", where "opt" is:
               <option_name>=<option_value>

       -o bucket
              If the bucket name (and path) is not specified on the command
              line, it must be supplied with this option.

       -o default_acl (default="private")
              the default canned ACL to apply to all written S3 objects,
              e.g., "private", "public-read".  See
              https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              for the full list of canned ACLs.

       -o retries (default="5")
              number of times to retry a failed S3 transaction.

       -o use_cache (default="" which means disabled)
              local folder to use for local file cache.

       -o check_cache_dir_exist (default is disable)
              If use_cache is set, check whether the cache directory exists.
              If this option is not specified, the cache directory will be
              created at runtime if it does not exist.

       -o del_cache - delete local file cache
              delete local file cache when s3fs starts and exits.

       -o storage_class (default="standard")
              store objects with the specified storage class.  Possible
              values: standard, standard_ia, onezone_ia, reduced_redundancy,
              intelligent_tiering, glacier, and deep_archive.

       -o use_rrs (default is disable)
              use Amazon's Reduced Redundancy Storage.  This option cannot
              be specified together with use_sse.  (Older versions accepted
              use_rrs=1.)  This option has been superseded by the
              storage_class option.

       -o use_sse (default is disable)
              Specify one of three types of Amazon's Server-Side Encryption:
              SSE-S3, SSE-C or SSE-KMS.  SSE-S3 uses Amazon S3-managed
              encryption keys, SSE-C uses customer-provided encryption keys,
              and SSE-KMS uses a master key which you manage in AWS KMS.
              Specifying "use_sse" or "use_sse=1" enables the SSE-S3 type
              (use_sse=1 is the old style parameter).  For SSE-C, specify
              "use_sse=custom", "use_sse=custom:<custom key file path>" or
              "use_sse=<custom key file path>" (passing only <custom key
              file path> is the old style parameter).  "c" can be used as an
              abbreviation for "custom".  The custom key file must have 600
              permissions.  The file may contain several lines, each line
              holding one SSE-C key.  The first line in the file is used as
              the customer-provided encryption key for uploading and
              changing headers, etc.  Any keys after the first line are used
              for downloading objects that were encrypted with a key other
              than the first one.  This way you can keep all of your SSE-C
              keys in one file, as an SSE-C key history.  If you specify
              "custom" ("c") without a file path, you must supply the custom
              key via the load_sse_c option or the AWSSSECKEYS environment
              variable (AWSSSECKEYS holds one or more SSE-C keys separated
              by ":").  This option selects the SSE type, so if you do not
              want to encrypt objects when uploading, but need to decrypt
              encrypted objects when downloading, use the load_sse_c option
              instead of this option.  For SSE-KMS, specify "use_sse=kmsid"
              or "use_sse=kmsid:<kms id>".  "k" can be used as an
              abbreviation for "kmsid".  If you want to use SSE-KMS with
              your own <kms id> in AWS KMS, set it after "kmsid:" (or "k:").
              If you specify only "kmsid" ("k"), you must set the
              AWSSSEKMSID environment variable to the <kms id>.  Note that
              you cannot use a KMS id from a region other than your EC2
              instance's region.

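       As a sketch, an SSE-C key file holding a current key plus one older
       key kept for decryption might be prepared like this (the path is
       illustrative, the keys are random placeholders, and base64-encoded
       256-bit keys are assumed; check your s3fs version for the exact key
       format it expects):

```shell
# Generate two example SSE-C keys (random 256-bit values, base64-encoded).
mkdir -p /tmp/s3fs-demo
head -c 32 /dev/urandom | base64 > /tmp/s3fs-demo/ssec.keys    # current key
head -c 32 /dev/urandom | base64 >> /tmp/s3fs-demo/ssec.keys   # older key, kept for decryption
# The key file must not be readable by other users.
chmod 600 /tmp/s3fs-demo/ssec.keys
```

       The file would then be referenced at mount time with something like
       "-o use_sse=custom:/tmp/s3fs-demo/ssec.keys" (bucket and mountpoint
       up to you).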
       -o load_sse_c - specify SSE-C keys
              Specify the customer-provided encryption keys file path for
              decrypting at download time.  If you use customer-provided
              encryption keys at upload time, specify them with
              "use_sse=custom".  The file may contain several lines; each
              line is one custom key.  This way you can keep all of your
              SSE-C keys in one file, as an SSE-C key history.  The
              AWSSSECKEYS environment variable has the same format as this
              file's contents.

       -o passwd_file (default="")
              specify the path to the password file, which takes precedence
              over the passwords in $HOME/.passwd-s3fs and /etc/passwd-s3fs

       -o ahbe_conf (default="" which means disabled)
              This option specifies the path to a configuration file that
              defines additional HTTP headers by file (object) extension.
               The configuration file format is below:
               -----------
               line         = [file suffix or regex] HTTP-header [HTTP-values]
               file suffix  = file (object) suffix; if this field is empty,
              it means "reg:(.*)" (= all objects).
               regex        = regular expression to match the file (object)
              path.  This type starts with the "reg:" prefix.
               HTTP-header  = additional HTTP header name
               HTTP-values  = additional HTTP header value
               -----------
               Sample:
               -----------
               .gz                    Content-Encoding  gzip
               .Z                     Content-Encoding  compress
               reg:^/MYDIR/(.*)[.]t2$ Content-Encoding  text2
               -----------
               A sample configuration file is provided in the "test"
              directory.  If you use this option to set the
              "Content-Encoding" HTTP header, please take care for RFC 2616.

       -o profile (default="default")
              Choose a profile from ${HOME}/.aws/credentials to authenticate
              against S3.  Note that this format matches the AWS CLI format
              and differs from the s3fs passwd format.

       -o public_bucket (default="" which means disabled)
              anonymously mount a public bucket when set to 1; ignores the
              $HOME/.passwd-s3fs and /etc/passwd-s3fs files.  S3 does not
              allow the copy object API for anonymous users, so s3fs sets
              the nocopyapi option automatically when public_bucket=1 is
              specified.

       -o connect_timeout (default="300" seconds)
              time to wait for connection before giving up.

       -o readwrite_timeout (default="120" seconds)
              time to wait between read/write activity before giving up.

       -o list_object_max_keys (default="1000")
              specify the maximum number of keys returned by the S3 list
              objects API.  The default is 1000.  You can set this value to
              1000 or more.

       -o max_stat_cache_size (default="100,000" entries (about 40MB))
              maximum number of entries in the stat cache and symbolic link
              cache.

       -o stat_cache_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache.  This expiry time is measured
              from the time the entry was cached.

       -o stat_cache_interval_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache.  This expiry time is measured
              from the last access time of the cached entry.  This option is
              exclusive with stat_cache_expire, and is left for
              compatibility with older versions.

       -o enable_noobj_cache (default is disable)
              enable cache entries for objects that do not exist.  s3fs
              always has to check whether a file (or sub directory) exists
              under an object (path) when executing a command, since s3fs
              recognizes as a directory a path that has no directory object
              but has files or sub directories under it.  This increases
              ListBucket requests and degrades performance.  You can specify
              this option for performance: s3fs memorizes in the stat cache
              that the object (file or directory) does not exist.

       -o no_check_certificate (by default this option is disabled)
              server certificate won't be checked against the available
              certificate authorities.

       -o ssl_verify_hostname (default="2")
              When 0, do not verify the SSL certificate against the
              hostname.

       -o nodnscache - disable DNS cache.
              s3fs always uses its DNS cache; this option disables the DNS
              cache.

       -o nosscache - disable SSL session cache.
              s3fs always uses its SSL session cache; this option disables
              the SSL session cache.

       -o multireq_max (default="20")
              maximum number of parallel requests for listing objects.

       -o parallel_count (default="5")
              number of parallel requests for uploading big objects.  s3fs
              uploads large objects (over 20MB) with multipart post
              requests, sending requests in parallel.  This option limits
              the number of parallel requests which s3fs issues at once.
              Set this value according to your CPU and network bandwidth.

       -o multipart_size (default="10")
              part size, in MB, for each multipart request.  The minimum
              value is 5 MB and the maximum value is 5 GB.

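       Since S3 permits at most 10,000 parts per multipart upload, the part
       size bounds the largest object s3fs can upload.  A quick back-of-the-
       envelope check, with sizes in MB:

```shell
# Maximum uploadable object size = multipart_size * 10000 (values in MB).
multipart_size=10                        # the s3fs default, in MB
max_object_mb=$((multipart_size * 10000))
echo "${max_object_mb} MB"               # prints "100000 MB" (about 97 GB)
```

       So with the default 10 MB part size, uploads top out near 97 GB;
       raise multipart_size if you need to store larger objects.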
       -o multipart_copy_size (default="512")
              part size, in MB, for each multipart copy request, used for
              renames and mixupload.  The minimum value is 5 MB and the
              maximum value is 5 GB.  Must be at least 512 MB to copy the
              maximum 5 TB object size, but lower values may improve
              performance.

       -o max_dirty_data (default="5120")
              Flush dirty data to S3 after a certain number of MB written.
              The minimum value is 50 MB.  A value of -1 disables this
              behavior.  Cannot be used with nomixupload.

       -o ensure_diskfree (default 0)
              sets the MB of disk space to keep free.  This option sets the
              threshold of free space on the disk that s3fs uses for cache
              files.  s3fs creates files for downloading, uploading and
              caching.  If the free disk space is smaller than this value,
              s3fs avoids using disk space as much as possible, in exchange
              for performance.

       -o multipart_threshold (default="25")
              threshold, in MB, to use multipart upload instead of
              single-part.  Must be at least 5 MB.

       -o singlepart_copy_limit (default="512")
              maximum size, in MB, of a single-part copy before trying
              multipart copy.

       -o host (default="https://s3.amazonaws.com")
              Set a non-Amazon host, e.g., https://example.com.

       -o servicepath (default="/")
              Set a service path when the non-Amazon host requires a prefix.

       -o url (default="https://s3.amazonaws.com")
              sets the URL to use to access Amazon S3.  If you want to use
              HTTP, you can set "url=http://s3.amazonaws.com".  If you do
              not use HTTPS, please specify the URL with the url option.

       -o endpoint (default="us-east-1")
              sets the endpoint (region) to use for signature version 4.
              If this option is not specified, s3fs uses the "us-east-1"
              region as the default.  If s3fs cannot connect to the region
              specified by this option, s3fs cannot run.  However, if you do
              not specify this option and cannot connect with the default
              region, s3fs will automatically retry connecting to another
              region: s3fs can learn the correct region name from the error
              returned by the S3 server.

       -o sigv2 (default is signature version 4 falling back to version 2)
              sign AWS requests using only signature version 2.

       -o sigv4 (default is signature version 4 falling back to version 2)
              sign AWS requests using only signature version 4.

       -o mp_umask (default is "0000")
              sets the umask for the mount point directory.  If the
              allow_other option is not set, s3fs allows access to the mount
              point only to the owner; otherwise s3fs allows access to all
              users by default.  If you set allow_other together with this
              option, you can control the permissions of the mount point
              with this option, like umask.

       -o umask (default is "0000")
              sets the umask for files under the mountpoint.  This can allow
              users other than the mounting user to read and write files
              that they did not create.

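       The effective permissions follow the usual umask rule: the mode bits
       are ANDed with the complement of the umask.  A quick illustration of
       the arithmetic (general umask behavior, not s3fs-specific):

```shell
# umask=0022 masks out the group/other write bits.
umask_value=0022
printf 'dirs:  %04o\n' "$(( 0777 & ~umask_value ))"   # prints "dirs:  0755"
printf 'files: %04o\n' "$(( 0666 & ~umask_value ))"   # prints "files: 0644"
```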
       -o nomultipart - disable multipart uploads

       -o enable_content_md5 (default is disable)
              Allow the S3 server to check the data integrity of uploads via
              the Content-MD5 header.  This can add CPU overhead to
              transfers.

       -o ecs (default is disable)
              This option instructs s3fs to query the ECS container
              credential metadata address instead of the instance metadata
              address.

       -o iam_role (default is no IAM role)
              This option requires the IAM role name or "auto".  If you
              specify "auto", s3fs will automatically use the IAM role name
              that is set on the instance.  Specifying this option without
              an argument is the same as specifying "auto".

       -o imdsv1only (default is to use IMDSv2 with fallback to v1)
              The AWS instance metadata service, used with IAM role
              authentication, supports the use of an API token.  If you are
              using an IAM role in an environment that does not support
              IMDSv2, setting this flag will skip retrieval and usage of the
              API token when retrieving IAM credentials.

       -o ibm_iam_auth (default is not using IBM IAM authentication)
              This option instructs s3fs to use IBM IAM authentication.  In
              this mode, the AWSAccessKey and AWSSecretKey will be used as
              IBM's Service-Instance-ID and APIKey, respectively.

       -o ibm_iam_endpoint (default is https://iam.bluemix.net)
              Sets the URL to use for IBM IAM authentication.

       -o use_xattr (default is not handling the extended attribute)
              Enable handling of extended attributes (xattrs).  If you set
              this option, you can use extended attributes.  For example,
              encfs and ecryptfs need extended attribute support.  Notice:
              if s3fs handles extended attributes, s3fs cannot work with the
              copy command with preserve=mode.

       -o noxmlns - disable registering xml name space.
              disable registering the XML name space for responses of
              ListBucketResult and ListVersionsResult etc.  The default name
              space is looked up from
              "http://s3.amazonaws.com/doc/2006-03-01".  This option should
              not be specified now, because s3fs looks up the xmlns
              automatically after v1.66.

       -o nomixupload - disable copy in multipart uploads.
              Disable using PUT (copy api) when multipart uploading large
              objects.  By default, when doing a multipart upload, the
              ranges of unchanged data will use PUT (copy api) whenever
              possible.  When nocopyapi or norenameapi is specified, use of
              PUT (copy api) is disabled even if this option is not
              specified.

       -o nocopyapi - for other incomplete compatibility object storage.
              For distributed object storage which is compatible with the S3
              API but lacks PUT (copy api).  If you set this option, s3fs
              does not use PUT with "x-amz-copy-source" (copy api).  Because
              traffic is increased 2-3 times by this option, we do not
              recommend it.

       -o norenameapi - for other incomplete compatibility object storage.
              For distributed object storage which is compatible with the S3
              API but lacks PUT (copy api).  This option is a subset of the
              nocopyapi option.  The nocopyapi option avoids the copy api
              for all commands (e.g. chmod, chown, touch, mv, etc.), but
              this option avoids the copy api only for the rename command
              (e.g. mv).  If this option is specified together with
              nocopyapi, s3fs ignores it.

       -o use_path_request_style (use legacy API calling style)
              Enable compatibility with S3-like APIs which do not support
              the virtual-host request style, by using the older path
              request style.

       -o listobjectsv2 (use ListObjectsV2)
              Issue ListObjectsV2 instead of ListObjects, useful on object
              stores without ListObjects support.

       -o noua (suppress User-Agent header)
              Usually s3fs outputs the User-Agent in "s3fs/<version> (commit
              hash <hash>; <using ssl library name>)" format.  If this
              option is specified, s3fs suppresses the output of the
              User-Agent.

       -o cipher_suites
              Customize the list of TLS cipher suites.  Expects a colon
              separated list of cipher suite names.  A list of available
              cipher suites, depending on your TLS engine, can be found in
              the cURL library documentation:
              https://curl.haxx.se/docs/ssl-ciphers.html

       -o instance_name
              The instance name of the current s3fs mountpoint.  This name
              will be added to logging messages and user agent headers sent
              by s3fs.

       -o complement_stat (complement lack of file/directory mode)
              s3fs complements the lack of information about file/directory
              mode if a file or directory object does not have an
              x-amz-meta-mode header.  By default, s3fs does not complement
              stat information for an object, so the object may not be
              listable or modifiable.

       -o notsup_compat_dir (not support compatibility directory types)
              By default, s3fs supports objects of the directory type as
              much as possible and recognizes them as directories.  Objects
              that can be recognized as directory objects are "dir/", "dir",
              "dir_$folder$", and a file object that has no directory object
              but whose path contains that directory.  s3fs needs redundant
              communication to support all of these directory types.  The
              directory object created by s3fs is "dir/".  By restricting
              s3fs to recognize only "dir/" as a directory, communication
              traffic can be reduced.  This option imposes that restriction
              on s3fs.  However, if there is a directory object other than
              "dir/" in the bucket, specifying this option is not
              recommended: s3fs may not be able to recognize objects
              correctly if an object not created by s3fs exists in the
              bucket.  Please use this option only when all directory
              objects in the bucket are "dir/" objects.

       -o use_wtf8 - support arbitrary file system encoding.
              S3 requires all object names to be valid UTF-8.  But some
              clients, notably Windows NFS clients, use their own encoding.
              This option re-encodes invalid UTF-8 object names into valid
              UTF-8 by mapping the offending codes into a 'private' codepage
              of the Unicode set.  Useful on clients not using UTF-8 as
              their file system encoding.

       -o use_session_token - indicate that session token should be provided.
              If credentials are provided by environment variables, this
              switch forces a presence check of the AWSSESSIONTOKEN
              variable.  Otherwise an error is returned.

       -o requester_pays (default is disable)
              This option instructs s3fs to enable requests involving
              Requester Pays buckets (it includes the
              'x-amz-request-payer=requester' entry in the request header).

       -o mime (default is "/etc/mime.types")
              Specify the path of the mime.types file.  If this option is
              not specified, the existence of "/etc/mime.types" is checked,
              and that file is loaded as mime information.  If this file
              does not exist on macOS, then "/etc/apache2/mime.types" is
              checked as well.

       -o logfile - specify the log output file.
              s3fs outputs its log to syslog by default.  Alternatively, if
              s3fs is started with the "-f" option, the log is output to
              stdout/stderr.  You can use this option to specify the log
              file that s3fs writes to.  If you specify a log file with this
              option, s3fs reopens the log file when it receives a SIGHUP
              signal.  You can use the SIGHUP signal for log rotation.

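       Because s3fs reopens its log on SIGHUP, the log file can be rotated
       with a logrotate rule along these lines (the log path and rotation
       schedule below are illustrative, not s3fs defaults):

```
/var/log/s3fs.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        # Ask s3fs to reopen its log file after rotation.
        pkill -HUP -x s3fs || true
    endscript
}
```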
       -o dbglevel (default="crit")
              Set the debug message level.  Set the value to crit
              (critical), err (error), warn (warning), or info (information)
              as the debug level.  The default debug level is critical.  If
              s3fs runs with the "-d" option, the debug level is set to
              information.  When s3fs catches the SIGUSR2 signal, the debug
              level is bumped up.

       -o curldbg - put curl debug message
              Output debug messages from libcurl when this option is
              specified.  Specify "normal" or "body" as the parameter.  If
              the parameter is omitted, it is the same as "normal".  If
              "body" is specified, some API communication body data will be
              output in addition to the debug messages output by "normal".

       -o no_time_stamp_msg - no time stamp in debug message
              A time stamp is output in debug messages by default.  If this
              option is specified, the time stamp will not be output in
              debug messages.  The effect is the same as setting the
              environment variable "S3FS_MSGTIMESTAMP" to "no".

       -o set_check_cache_sigusr1 (default is stdout)
              If the cache is enabled, you can check the integrity of the
              cache files and the cache files' stats info files.  When this
              option is specified, sending the SIGUSR1 signal to the s3fs
              process checks the cache status at that time.  This option can
              take a file path as a parameter to output the check result to
              that file.  The file path parameter can be omitted; if
              omitted, the result will be output to stdout or syslog.

   utility mode options
       -u or --incomplete-mpu-list
              Lists incomplete multipart objects uploaded to the specified
              bucket.

       --incomplete-mpu-abort all or date format (default="24H")
              Delete incomplete multipart objects uploaded to the specified
              bucket.  If "all" is specified for this option, all incomplete
              multipart objects will be deleted.  If you specify no argument
              for this option, objects older than 24 hours (24H) will be
              deleted (this is the default).  You can specify an optional
              date format.  It can be specified as years, months, days,
              hours, minutes and seconds, expressed as "Y", "M", "D", "h",
              "m" and "s" respectively.  For example, "1Y6M10D12h30m30s".


FUSE/MOUNT OPTIONS

       Most of the generic mount options described in 'man mount' are
       supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime,
       noatime, sync, async, dirsync).  Filesystems are mounted with
       '-onodev,nosuid' by default, which can only be overridden by a
       privileged user.

       There are many FUSE specific mount options that can be specified,
       e.g. allow_other.  See the FUSE README for the full set.

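       For example, a bucket can be mounted at boot via /etc/fstab with the
       fuse.s3fs filesystem type (the bucket name, mountpoint and options
       below are placeholders):

```
mybucket /mnt/s3 fuse.s3fs _netdev,allow_other,use_cache=/tmp 0 0
```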

NOTES

       The maximum size of objects that s3fs can handle depends on Amazon
       S3: up to 5 GB when using the single PUT API, and up to 5 TB when the
       Multipart Upload API is used.

       If enabled via the "use_cache" option, s3fs automatically maintains a
       local cache of files in the folder specified by use_cache.  Whenever
       s3fs needs to read or write a file on S3, it first downloads the
       entire file locally to the folder specified by use_cache and operates
       on it.  When fuse_release() is called, s3fs will re-upload the file
       to S3 if it has been changed.  s3fs uses MD5 checksums to minimize
       downloads from S3.

       The folder specified by use_cache is just a local cache.  It can be
       deleted at any time.  s3fs rebuilds it on demand.

       Local file caching works by calculating and comparing MD5 checksums
       (ETag HTTP header).

       s3fs leverages /etc/mime.types to "guess" the "correct" content-type
       based on file name extension.  This means that you can copy a website
       to S3 and serve it up directly from S3 with correct content-types!


SEE ALSO

       fuse(8), mount(8), fusermount(1), fstab(5)


BUGS

       Due to S3's "eventual consistency" limitations, file creation can and
       will occasionally fail.  Even after a successful create, subsequent
       reads can fail for an indeterminate time, even after one or more
       successful reads.  Create and read enough files and you will
       eventually encounter this failure.  This is not a flaw in s3fs and it
       is not something a FUSE wrapper like s3fs can work around.  The
       retries option does not address this issue.  Your application must
       either tolerate or compensate for these failures, for example by
       retrying creates or reads.


AUTHOR

       s3fs has been written by Randy Rizun <rrizun@gmail.com>.



S3FS                             February 2011                         S3FS(1)