S3FS(1)                          User Commands                         S3FS(1)

NAME

       S3FS - FUSE-based file system backed by Amazon S3

SYNOPSIS

   mounting
       s3fs bucket[:/path] mountpoint  [options]

       s3fs mountpoint  [options (must specify bucket= option)]

   unmounting
       umount mountpoint
              For root.

       fusermount -u mountpoint
              For unprivileged user.

   utility mode (remove interrupted multipart uploading objects)
       s3fs --incomplete-mpu-list (-u) bucket

       s3fs --incomplete-mpu-abort[=all | =<expire date format>] bucket

DESCRIPTION

       s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket
       as a local filesystem. It stores files natively and transparently in
       S3 (i.e., you can use other programs to access the same files).
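       For example, a typical session might look like this (the bucket name
       and mountpoint below are placeholders; s3fs and credentials must
       already be set up):

```shell
# Mount the bucket "mybucket" at /mnt/s3 (placeholder names).
s3fs mybucket /mnt/s3

# Mount a path inside the bucket instead of the bucket root:
s3fs mybucket:/archive /mnt/s3

# Unmount as an unprivileged user:
fusermount -u /mnt/s3
```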

AUTHENTICATION

       s3fs supports the standard AWS credentials file
       (https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html)
       stored in `${HOME}/.aws/credentials`. Alternatively, s3fs supports a
       custom passwd file. Only the AWS credentials file format can be used
       when an AWS session token is required. The s3fs password file has
       this format (use this format if you have only one set of
       credentials):
           accessKeyId:secretAccessKey

       If you have more than one set of credentials, this syntax is also
       recognized:
           bucketName:accessKeyId:secretAccessKey

       Password files can be stored in two locations:
            /etc/passwd-s3fs     [0640]
            $HOME/.passwd-s3fs   [0600]

       s3fs also recognizes the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
       environment variables.
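       As a concrete sketch, a per-user password file can be created like
       this (the key values below are the standard AWS documentation
       placeholders, not real credentials):

```shell
# Create a per-user s3fs password file in the accessKeyId:secretAccessKey format.
echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > "$HOME/.passwd-s3fs"

# The per-user file must have 0600 permissions, as noted above.
chmod 600 "$HOME/.passwd-s3fs"
```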

OPTIONS

   general options
       -h   --help
              print help

            --version
              print version

       -f     FUSE foreground option - do not run as daemon.

       -s     FUSE single-threaded option (disables multi-threaded operation)

   mount options
       All s3fs options must be given in the form:
               <option_name>=<option_value>

       -o bucket
              if the bucket name (and path) is not specified on the command
              line, it must be specified with this option after -o.

       -o default_acl (default="private")
              the default canned ACL to apply to all written S3 objects,
              e.g., "private", "public-read". See
              https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              for the full list of canned ACLs.

       -o retries (default="5")
              number of times to retry a failed S3 transaction.

       -o tmpdir (default="/tmp")
              local folder for temporary files.

       -o use_cache (default="" which means disabled)
              local folder to use for the local file cache.

       -o check_cache_dir_exist (default is disable)
              If use_cache is set, check whether the cache directory
              exists. If this option is not specified, the cache directory
              will be created at runtime if it does not exist.

       -o del_cache - delete local file cache
              delete the local file cache when s3fs starts and exits.

       -o storage_class (default="standard")
              store objects with the specified storage class. Possible
              values: standard, standard_ia, onezone_ia,
              reduced_redundancy, intelligent_tiering, glacier, glacier_ir,
              and deep_archive.

       -o use_rrs (default is disable)
              use Amazon's Reduced Redundancy Storage. This option cannot
              be specified together with use_sse. (use_rrs=1 can be
              specified for old versions.) This option has been replaced by
              the newer storage_class option.

       -o use_sse (default is disable)
              Specify one of three types of Amazon Server-Side Encryption:
              SSE-S3, SSE-C or SSE-KMS. SSE-S3 uses Amazon S3-managed
              encryption keys, SSE-C uses customer-provided encryption
              keys, and SSE-KMS uses a master key which you manage in AWS
              KMS. Specifying "use_sse" or "use_sse=1" enables the SSE-S3
              type (use_sse=1 is the old-style parameter). For SSE-C,
              specify "use_sse=custom", "use_sse=custom:<custom key file
              path>" or "use_sse=<custom key file path>" (specifying only
              <custom key file path> is the old-style parameter). You can
              use "c" as an abbreviation for "custom". The custom key file
              must have 600 permissions. The file can contain several
              lines; each line is one SSE-C key. The first line in the file
              is used as the customer-provided encryption key for uploading
              and changing headers etc. Any keys after the first line are
              used for downloading objects that were encrypted with a key
              other than the first one. This lets you keep all SSE-C keys
              in the file as an SSE-C key history. If you specify "custom"
              ("c") without a file path, you need to set the custom key
              with the load_sse_c option or the AWSSSECKEYS environment
              variable (AWSSSECKEYS holds SSE-C keys separated by ":").
              This option decides the SSE type, so if you do not want to
              encrypt objects on upload but need to decrypt encrypted
              objects on download, use the load_sse_c option instead of
              this option. For SSE-KMS, specify "use_sse=kmsid" or
              "use_sse=kmsid:<kms id>". You can use "k" as an abbreviation
              for "kmsid". If you want to use SSE-KMS with your own
              <kms id> in AWS KMS, set it after "kmsid:" (or "k:"). If you
              specify only "kmsid" ("k"), you need to set the AWSSSEKMSID
              environment variable to the <kms id>. Note that you cannot
              use a KMS id from a region different from the EC2 region.

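       As a sketch of preparing an SSE-C key file (the file path is an
       arbitrary choice here), each line holds one key and the file must
       have 600 permissions:

```shell
# Generate a random 256-bit key, base64-encoded, as the first (current)
# key in the file; older keys can be appended on later lines.
head -c 32 /dev/urandom | base64 > "$HOME/.s3fs-sse-c.key"
chmod 600 "$HOME/.s3fs-sse-c.key"

# Mount using the custom key file (bucket/mountpoint are placeholders):
# s3fs mybucket /mnt/s3 -o use_sse=custom:"$HOME/.s3fs-sse-c.key"
```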
       -o load_sse_c - specify SSE-C keys
              Specify the path of the customer-provided encryption keys
              file used for decrypting on download. If you use a
              customer-provided encryption key on upload, specify it with
              "use_sse=custom". The file may contain several lines; each
              line is one custom key, so you can keep all SSE-C keys in the
              file as an SSE-C key history. The AWSSSECKEYS environment
              variable holds the same contents as this file.

       -o passwd_file (default="")
              specify the path to the password file, which takes precedence
              over the passwords in $HOME/.passwd-s3fs and
              /etc/passwd-s3fs.

       -o ahbe_conf (default="" which means disabled)
              This option specifies the path of a configuration file that
              defines additional HTTP headers by file (object) extension.
               The configuration file format is below:
               -----------
               line         = [file suffix or regex] HTTP-header [HTTP-values]
               file suffix  = file (object) suffix; if this field is empty,
              it means "reg:(.*)" (= all objects).
               regex        = regular expression to match the file (object)
              path. This type starts with the "reg:" prefix.
               HTTP-header  = additional HTTP header name
               HTTP-values  = additional HTTP header value
               -----------
               Sample:
               -----------
               .gz                    Content-Encoding  gzip
               .Z                     Content-Encoding  compress
               reg:^/MYDIR/(.*)[.]t2$ Content-Encoding  text2
               -----------
               A sample configuration file is provided in the "test"
              directory. If you use this option to set the
              "Content-Encoding" HTTP header, please take care to follow
              RFC 2616.

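       The sample rules above can be written to a file (the path below is
       arbitrary) and passed via -o ahbe_conf:

```shell
# Write an ahbe_conf file that adds Content-Encoding headers by suffix.
cat > /tmp/ahbe.conf <<'EOF'
.gz                    Content-Encoding  gzip
.Z                     Content-Encoding  compress
reg:^/MYDIR/(.*)[.]t2$ Content-Encoding  text2
EOF

# Mount with it (bucket/mountpoint are placeholders):
# s3fs mybucket /mnt/s3 -o ahbe_conf=/tmp/ahbe.conf
```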
       -o profile (default="default")
              Choose a profile from ${HOME}/.aws/credentials to
              authenticate against S3. Note that this format matches the
              AWS CLI format and differs from the s3fs passwd format.

       -o public_bucket (default="" which means disabled)
              anonymously mount a public bucket when set to 1; ignores the
              $HOME/.passwd-s3fs and /etc/passwd-s3fs files. S3 does not
              allow the copy object API for anonymous users, so s3fs sets
              the nocopyapi option automatically when public_bucket=1 is
              specified.

       -o connect_timeout (default="300" seconds)
              time to wait for connection before giving up.

       -o readwrite_timeout (default="120" seconds)
              time to wait between read/write activity before giving up.

       -o list_object_max_keys (default="1000")
              specify the maximum number of keys returned by the S3 list
              objects API. The default is 1000. You can set this value to
              1000 or more.

       -o max_stat_cache_size (default="100,000" entries (about 40MB))
              maximum number of entries in the stat cache and symbolic link
              cache.

       -o stat_cache_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache. This expiry time is measured
              from the time an entry was cached.

       -o stat_cache_interval_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache. This expiry time is measured
              from the last access time of the cached entry. This option is
              exclusive with stat_cache_expire, and is left for
              compatibility with older versions.

       -o disable_noobj_cache (default is enable)
              By default s3fs remembers that an object does not exist until
              the stat cache entry expires. This caching can cause
              staleness for applications. If disabled, s3fs will not
              remember missing objects, which may cause extra HeadObject
              requests and reduce performance.

       -o no_check_certificate (by default this option is disabled)
              the server certificate won't be checked against the available
              certificate authorities.

       -o ssl_verify_hostname (default="2")
              When 0, do not verify the SSL certificate against the
              hostname.

       -o nodnscache - disable DNS cache.
              s3fs always uses a DNS cache; this option disables it.

       -o nosscache - disable SSL session cache.
              s3fs always uses an SSL session cache; this option disables
              it.

       -o multireq_max (default="20")
              maximum number of parallel requests for listing objects.

       -o parallel_count (default="5")
              number of parallel requests for uploading big objects. s3fs
              uploads large objects (over 25MB by default) with multipart
              upload requests, sent in parallel. This option limits the
              number of parallel requests s3fs issues at once. Set this
              value according to your CPU and network bandwidth.

       -o multipart_size (default="10")
              part size, in MB, for each multipart request. The minimum
              value is 5 MB and the maximum value is 5 GB.

       -o multipart_copy_size (default="512")
              part size, in MB, for each multipart copy request, used for
              renames and mixupload. The minimum value is 5 MB and the
              maximum value is 5 GB. Must be at least 512 MB to copy the
              maximum 5 TB object size, but lower values may improve
              performance.

       -o max_dirty_data (default="5120")
              Flush dirty data to S3 after a certain number of MB written.
              The minimum value is 50 MB. A value of -1 disables this
              behavior. Cannot be used with nomixupload.

       -o bucket_size (default=maximum long unsigned integer value)
              The size of the bucket with which the corresponding elements
              of the statvfs structure will be filled. The option argument
              is an integer optionally followed by a multiplicative suffix
              (GB, GiB, TB, TiB, PB, PiB, EB, EiB) (no spaces in between).
              If no suffix is supplied, bytes are assumed; e.g., 20000000,
              30GB, 45TiB. Note that s3fs does not compute the actual
              volume size (too expensive): by default it will assume the
              maximum possible size; however, since this may confuse other
              software which uses s3fs, the advertised bucket size can be
              set with this option.

       -o ensure_diskfree (default 0)
              sets the amount of disk space, in MB, to keep free. This
              option sets the threshold of free space on the disk that s3fs
              uses for cache files. s3fs creates local files for
              downloading, uploading and caching. If the free disk space
              would drop below this value, s3fs avoids using disk space
              where possible, at the expense of performance.

       -o multipart_threshold (default="25")
              threshold, in MB, to use multipart upload instead of
              single-part. Must be at least 5 MB.

       -o singlepart_copy_limit (default="512")
              maximum size, in MB, of a single-part copy before trying
              multipart copy.

       -o host (default="https://s3.amazonaws.com")
              Set a non-Amazon host, e.g., https://example.com.

       -o servicepath (default="/")
              Set a service path when the non-Amazon host requires a
              prefix.

       -o url (default="https://s3.amazonaws.com")
              sets the URL to use to access Amazon S3. If you want to use
              HTTP, set "url=http://s3.amazonaws.com". If you do not use
              https, please specify the URL with the url option.

       -o endpoint (default="us-east-1")
              sets the endpoint (region) to use with signature version 4.
              If this option is not specified, s3fs uses the "us-east-1"
              region as the default. If s3fs cannot connect to the region
              specified by this option, s3fs cannot run. But if you do not
              specify this option and cannot connect with the default
              region, s3fs will automatically retry against another region;
              s3fs can determine the correct region name because the S3
              server returns it in the error response.

       -o sigv2 (default is signature version 4 falling back to version 2)
              sign AWS requests using only signature version 2.

       -o sigv4 (default is signature version 4 falling back to version 2)
              sign AWS requests using only signature version 4.

       -o mp_umask (default is "0000")
              sets the umask for the mount point directory. If the
              allow_other option is not set, s3fs allows access to the
              mount point only to the owner; otherwise s3fs allows access
              to all users by default. If you set allow_other together with
              this option, you can control the permissions of the mount
              point with this option, like umask.

       -o umask (default is "0000")
              sets the umask for files under the mountpoint. This can allow
              users other than the mounting user to read and write files
              that they did not create.

       -o nomultipart - disable multipart uploads

       -o streamupload (default is disable)
              Enable stream upload. If this option is enabled, a sequential
              upload will be performed in parallel with writing, starting
              from the parts already written during a multipart upload.
              This is expected to give better performance than the other
              upload functions. Note that this option is still experimental
              and may change in the future.

       -o max_thread_count (default is "5")
              Specifies the number of threads waiting for stream uploads.
              Note that this option and stream upload are still
              experimental and subject to change in the future. This option
              will be merged with "parallel_count" in the future.

       -o enable_content_md5 (default is disable)
              Allow the S3 server to check data integrity of uploads via
              the Content-MD5 header. This can add CPU overhead to
              transfers.

       -o enable_unsigned_payload (default is disable)
              Do not calculate the Content-SHA256 for PutObject and
              UploadPart payloads. This can reduce CPU overhead for
              transfers.

       -o ecs (default is disable)
              This option instructs s3fs to query the ECS container
              credential metadata address instead of the instance metadata
              address.

       -o iam_role (default is no IAM role)
              This option takes an IAM role name or "auto". If you specify
              "auto", s3fs will automatically use the IAM role name that is
              set on the instance. Specifying this option without an
              argument is the same as specifying "auto".

       -o imdsv1only (default is to use IMDSv2 with fallback to v1)
              The AWS instance metadata service, used with IAM role
              authentication, supports the use of an API token. If you're
              using an IAM role in an environment that does not support
              IMDSv2, setting this flag will skip retrieval and usage of
              the API token when retrieving IAM credentials.

       -o ibm_iam_auth (default is not using IBM IAM authentication)
              This option instructs s3fs to use IBM IAM authentication. In
              this mode, the AWSAccessKey and AWSSecretKey will be used as
              IBM's Service-Instance-ID and APIKey, respectively.

       -o ibm_iam_endpoint (default is https://iam.cloud.ibm.com)
              Sets the URL to use for IBM IAM authentication.

       -o credlib (default="")
              Specifies the shared library that handles the credentials
              containing the authentication token. If this option is
              specified, the credential and token processing provided by
              the shared library will be performed instead of the built-in
              credential processing. This option cannot be specified with
              the passwd_file, profile, use_session_token, ecs,
              ibm_iam_auth, ibm_iam_endpoint, imdsv1only and iam_role
              options.

       -o credlib_opts (default="")
              Specifies the options to pass when the shared library
              specified in credlib is loaded and initialized. For the
              string specified in this option, specify the string defined
              by the shared library.

       -o use_xattr (default is not handling extended attributes)
              Enable handling of extended attributes (xattrs). If you set
              this option, you can use extended attributes. For example,
              encfs and ecryptfs need extended attribute support. Notice:
              if s3fs handles extended attributes, s3fs cannot work with
              the copy command with preserve=mode.

       -o noxmlns - disable registering xml name space.
              disable registering the XML name space for responses of
              ListBucketResult and ListVersionsResult etc. The default name
              space is looked up from
              "http://s3.amazonaws.com/doc/2006-03-01". This option should
              not be specified now, because s3fs looks up the xmlns
              automatically after v1.66.

       -o nomixupload - disable copy in multipart uploads.
              Disable the use of PUT (copy api) when multipart uploading
              large objects. By default, when doing a multipart upload, the
              ranges of unchanged data will use PUT (copy api) whenever
              possible. When nocopyapi or norenameapi is specified, use of
              PUT (copy api) is disabled even if this option is not
              specified.

       -o nocopyapi - for other incomplete compatibility object storage.
              For distributed object storage which is compatible with the
              S3 API but lacks PUT (copy api). If you set this option, s3fs
              does not use PUT with "x-amz-copy-source" (copy api). Because
              traffic is increased 2-3 times by this option, we do not
              recommend it.

       -o norenameapi - for other incomplete compatibility object storage.
              For distributed object storage which is compatible with the
              S3 API but lacks PUT (copy api). This option is a subset of
              the nocopyapi option. The nocopyapi option avoids the
              copy-api for all commands (e.g. chmod, chown, touch, mv,
              etc.), but this option avoids the copy-api only for the
              rename command (e.g. mv). If this option is specified
              together with nocopyapi, s3fs ignores it.

       -o use_path_request_style (use legacy API calling style)
              Enable compatibility with S3-like APIs which do not support
              the virtual-host request style, by using the older path
              request style.

       -o listobjectsv2 (use ListObjectsV2)
              Issue ListObjectsV2 instead of ListObjects, useful on object
              stores without ListObjects support.

       -o noua (suppress User-Agent header)
              Usually s3fs sends the User-Agent in "s3fs/<version> (commit
              hash <hash>; <using ssl library name>)" format. If this
              option is specified, s3fs suppresses the User-Agent header.

       -o cipher_suites
              Customize the list of TLS cipher suites. Expects a
              colon-separated list of cipher suite names. A list of
              available cipher suites, depending on your TLS engine, can be
              found in the curl library documentation:
              https://curl.haxx.se/docs/ssl-ciphers.html

       -o instance_name
              The instance name of the current s3fs mountpoint. This name
              will be added to logging messages and user agent headers sent
              by s3fs.

       -o complement_stat (complement lack of file/directory mode)
              s3fs complements the lack of file/directory mode information
              if a file or a directory object does not have an
              x-amz-meta-mode header. By default, s3fs does not complement
              stat information for an object, so the object may not be
              listable or modifiable.

       -o compat_dir (enable support of alternative directory names)
              s3fs supports two different naming schemas, "dir/" and "dir",
              to map directory names to S3 objects and vice versa by
              default. As a third variant, directories can be determined
              indirectly if there is a file object with a path (e.g.
              "/dir/file") but without the parent directory. This option
              enables a fourth variant, "dir_$folder$", created by older
              applications.

              s3fs uses only the first schema, "dir/", to create S3 objects
              for directories.

              The support for these different naming schemas causes an
              increased communication effort.

              If you do not have access permissions to the bucket and
              specify a directory path created by a client other than s3fs
              for the mount point, s3fs cannot start because the mount
              point directory cannot be found. By specifying this option,
              you can avoid this error.

       -o use_wtf8 - support arbitrary file system encoding.
              S3 requires all object names to be valid UTF-8. But some
              clients, notably Windows NFS clients, use their own encoding.
              This option re-encodes invalid UTF-8 object names into valid
              UTF-8 by mapping offending codes into a 'private' codepage of
              the Unicode set. Useful on clients not using UTF-8 as their
              file system encoding.

       -o use_session_token - indicate that a session token should be provided.
              If credentials are provided by environment variables, this
              switch forces a presence check of the AWS_SESSION_TOKEN
              variable. Otherwise an error is returned.

       -o requester_pays (default is disable)
              This option instructs s3fs to enable requests involving
              Requester Pays buckets (it includes the
              'x-amz-request-payer=requester' entry in the request header).

       -o mime (default is "/etc/mime.types")
              Specify the path of the mime.types file. If this option is
              not specified, the existence of "/etc/mime.types" is checked,
              and that file is loaded as mime information. If this file
              does not exist on macOS, then "/etc/apache2/mime.types" is
              checked as well.

       -o proxy (default="")
              This option specifies a proxy to the S3 server. Specify the
              proxy in '[<scheme>://]hostname(fqdn)[:<port>]' format. The
              ':<port>' part can be omitted; if omitted, port 443 is used
              for the HTTPS scheme, and port 1080 otherwise. This option is
              the same as the curl command's '--proxy(-x)' option and
              libcurl's 'CURLOPT_PROXY' flag. This option is equivalent to
              and takes precedence over the environment variables
              'http_proxy', 'all_proxy', etc.

       -o proxy_cred_file (default="")
              This option specifies a file that contains the username and
              passphrase for authentication with the proxy when an HTTP
              scheme proxy is specified by the 'proxy' option. The username
              and passphrase are valid only for the HTTP scheme. If the
              HTTP proxy does not require authentication, this option is
              not required. Separate the username and passphrase with a ':'
              character and specify each as a URL-encoded string.

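       A minimal sketch of building such a file, using python3 for the
       URL-encoding (the file path, username and passphrase below are
       placeholders):

```shell
# URL-encode the username and passphrase, then join them with ':'.
user=$(python3 -c "import urllib.parse; print(urllib.parse.quote('alice', safe=''))")
pass=$(python3 -c "import urllib.parse; print(urllib.parse.quote('p@ss:w0rd', safe=''))")
printf '%s:%s\n' "$user" "$pass" > /tmp/proxy-cred
chmod 600 /tmp/proxy-cred

# Then mount with, e.g.:
# s3fs mybucket /mnt/s3 -o proxy=http://proxy.example.com:3128 -o proxy_cred_file=/tmp/proxy-cred
```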
       -o logfile - specify the log output file.
              s3fs outputs its log to syslog. Alternatively, if s3fs is
              started with the "-f" option, the log will be output to
              stdout/stderr. You can use this option to specify a log file
              that s3fs writes to. If you specify a log file with this
              option, s3fs will reopen the log file when it receives a
              SIGHUP signal. You can use the SIGHUP signal for log
              rotation.

       -o dbglevel (default="crit")
              Set the debug message level. Set the value to crit
              (critical), err (error), warn (warning), or info
              (information). The default debug level is critical. If s3fs
              is run with the "-d" option, the debug level is set to
              information. When s3fs catches the SIGUSR2 signal, the debug
              level is bumped up.

       -o curldbg - put curl debug messages
              Output debug messages from libcurl when this option is
              specified. Specify "normal" or "body" for the parameter. If
              the parameter is omitted, it is the same as "normal". If
              "body" is specified, some API communication body data will be
              output in addition to the debug messages output by "normal".

       -o no_time_stamp_msg - no time stamp in debug messages
              The time stamp is output in debug messages by default. If
              this option is specified, the time stamp will not be output
              in debug messages. This is the same as setting the
              environment variable "S3FS_MSGTIMESTAMP" to "no".

       -o set_check_cache_sigusr1 (default is stdout)
              If the cache is enabled, you can check the integrity of the
              cache files and the cache files' stats info files. When this
              option is specified, sending the SIGUSR1 signal to the s3fs
              process checks the cache status at that time. This option can
              take a file path as a parameter to output the check result to
              that file. The file path parameter can be omitted; if
              omitted, the result will be output to stdout or syslog.

       -o update_parent_dir_stat (default is disable)
              The parent directory's mtime and ctime are updated when a
              file or directory is created or deleted (when the parent
              directory's inode is updated). By default, parent directory
              statistics are not updated.

   utility mode options
       -u or --incomplete-mpu-list
              Lists incomplete multipart upload objects in the specified
              bucket.

       --incomplete-mpu-abort all or date format (default="24H")
              Deletes incomplete multipart upload objects in the specified
              bucket.  If "all" is specified, all incomplete multipart
              upload objects are deleted.  If no argument is given,
              objects older than 24 hours (24H) are deleted (this is the
              default).  You can also specify an expiry age in a date
              format using "Y", "M", "D", "h", "m" and "s" for years,
              months, days, hours, minutes and seconds respectively, for
              example "1Y6M10D12h30m30s".

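
       For example (the bucket name is illustrative):

```shell
# List incomplete multipart uploads in the bucket
s3fs --incomplete-mpu-list mybucket

# Abort incomplete uploads older than 24 hours (the default)
s3fs --incomplete-mpu-abort mybucket

# Abort incomplete uploads older than 1 year, 6 months, 10 days,
# 12 hours, 30 minutes and 30 seconds
s3fs --incomplete-mpu-abort=1Y6M10D12h30m30s mybucket

# Abort all incomplete uploads
s3fs --incomplete-mpu-abort=all mybucket
```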

FUSE/MOUNT OPTIONS
       Most of the generic mount options described in 'man mount' are
       supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime,
       noatime, sync, async, dirsync).  Filesystems are mounted with
       '-onodev,nosuid' by default, which can only be overridden by a
       privileged user.

       There are many FUSE-specific mount options that can be specified,
       e.g. allow_other.  See the FUSE README for the full set.
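
       For example, a read-only mount that is visible to other local users
       might look like this (bucket and mountpoint are illustrative):

```shell
# Mount read-only, without access-time updates, and allow other users
# to access the mountpoint (allow_other may additionally require
# user_allow_other to be set in /etc/fuse.conf)
s3fs mybucket /mnt/s3 -o ro,noatime -o allow_other
```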

SERVER URL/REQUEST STYLE
       Be careful when specifying the server endpoint (URL).

       If your bucket name contains dots ("."), you should use the path
       request style (the "use_path_request_style" option).

       If you are using a server other than Amazon S3, you need to specify
       the endpoint with the "url" option.  Depending on the server you
       are using, you may also have to specify the path request style
       ("use_path_request_style" option).
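
       For example (the bucket names, mountpoint and endpoint URL are
       illustrative):

```shell
# A bucket name containing dots requires the path request style
s3fs my.bucket.name /mnt/s3 -o use_path_request_style

# A non-Amazon, S3-compatible server needs an explicit endpoint, and,
# depending on the server, possibly the path request style as well
s3fs mybucket /mnt/s3 -o url=https://s3.example.com \
    -o use_path_request_style
```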

LOCAL STORAGE CONSUMPTION
       s3fs requires local storage for operation.  You can enable a local
       cache with "-o use_cache"; otherwise s3fs uses temporary files to
       cache pending requests to S3.

       Apart from the requirements discussed below, it is recommended to
       keep enough cache or temporary storage to allow one copy of each
       file open for reading or writing at any one time.

          Local cache with "-o use_cache"

       s3fs automatically maintains a local cache of files.  The cache
       folder is specified by the parameter of "-o use_cache".  It is only
       a local cache and can be deleted at any time; s3fs rebuilds it if
       necessary.

       Whenever s3fs needs to read or write a file on S3, it first creates
       the file in the cache directory and operates on it.

       The amount of local cache storage used can be indirectly controlled
       with "-o ensure_diskfree".
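
       For example (paths are illustrative; interpreting the
       "ensure_diskfree" value in MB is an assumption):

```shell
# Cache files under /var/cache/s3fs, but have s3fs keep at least
# 2048 MB of free space on that filesystem
s3fs mybucket /mnt/s3 -o use_cache=/var/cache/s3fs -o ensure_diskfree=2048
```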

          Without local cache

       Since s3fs always requires some storage space for operation, it
       creates temporary files to store incoming write requests until the
       required S3 request size is reached and the segment has been
       uploaded.  After that, this data is truncated in the temporary file
       to free up storage space.

       Per file you need at least twice the part size (default 5 MB, or
       "-o multipart_size") for writing multipart requests, or space for
       the whole file if single requests are enabled ("-o nomultipart").
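
       For example (interpreting the "multipart_size" value in MB is an
       assumption):

```shell
# Larger parts mean fewer S3 requests, but roughly twice the part size
# of temporary storage per file being written (here up to ~128 MB)
s3fs mybucket /mnt/s3 -o multipart_size=64

# Disable multipart uploads entirely; each file then needs temporary
# space for its whole size, and the single PUT limit (5 GB) applies
s3fs mybucket /mnt/s3 -o nomultipart
```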

PERFORMANCE CONSIDERATIONS
       This section discusses settings to improve s3fs performance.

       In most cases, backend performance cannot be controlled and is
       therefore not part of this discussion.

       Details of local storage usage are discussed in "LOCAL STORAGE
       CONSUMPTION".

          CPU and Memory Consumption

       s3fs is a multi-threaded application.  Depending on the workload,
       it may use multiple CPUs and a certain amount of memory.  You can
       monitor the CPU and memory consumption with the "top" utility.

          Performance of S3 requests

       s3fs provides several options (e.g. "-o multipart_size", "-o
       parallel_count") to control behaviour and thus indirectly the
       performance.  The possible combinations of these options in
       conjunction with the various S3 backends are so varied that there
       is no individual recommendation other than the default values.
       Improved individual settings can be found by testing and measuring.

       The options "Enable no object cache" ("-o enable_noobj_cache") and
       "Enable support of alternative directory names" ("-o compat_dir")
       can be used to control shared access to the same bucket by
       different applications:

       •      Enable no object cache ("-o enable_noobj_cache")

              If a bucket is used exclusively by an s3fs instance, you can
              enable a cache for non-existent files and directories with
              "-o enable_noobj_cache".  This eliminates repeated requests
              to check the existence of an object, saving time and
              possibly money.

       •      Enable support of alternative directory names ("-o
              compat_dir")

              s3fs recognizes "dir/" objects as directories.  Clients
              other than s3fs may use "dir" or "dir_$folder$" objects as
              directories, or directory objects may not exist at all.  To
              make s3fs recognize these as directories, specify the
              "compat_dir" option.

       •      Completion of file and directory information ("-o
              complement_stat")

              s3fs uses the "x-amz-meta-mode" header to determine whether
              an object is a file or a directory.  Objects that do not
              have this header may therefore not produce the expected
              results (for example, a directory cannot be displayed).  By
              specifying the "complement_stat" option, s3fs automatically
              completes this missing attribute information, and you get
              the expected results.
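
       For example, a bucket used exclusively by one s3fs instance, but
       written to by other S3 clients in the past, might be mounted like
       this (bucket and mountpoint are illustrative):

```shell
# enable_noobj_cache is only safe if no other client modifies the
# bucket while it is mounted; compat_dir and complement_stat help with
# directories and metadata created by clients other than s3fs
s3fs mybucket /mnt/s3 -o enable_noobj_cache -o compat_dir \
    -o complement_stat
```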

NOTES
       The maximum size of objects that s3fs can handle depends on Amazon
       S3: up to 5 GB when using the single PUT API, and up to 5 TB when
       the Multipart Upload API is used.

       s3fs leverages /etc/mime.types to "guess" the "correct"
       content-type based on the file name extension.  This means that you
       can copy a website to S3 and serve it up directly from S3 with
       correct content-types!

SEE ALSO
       fuse(8), mount(8), fusermount(1), fstab(5)

BUGS
       Due to S3's "eventual consistency" limitations, file creation can
       and will occasionally fail.  Even after a successful create,
       subsequent reads can fail for an indeterminate time, even after one
       or more successful reads.  Create and read enough files and you
       will eventually encounter this failure.  This is not a flaw in
       s3fs, and it is not something a FUSE wrapper like s3fs can work
       around.  The retries option does not address this issue.  Your
       application must either tolerate or compensate for these failures,
       for example by retrying creates or reads.

AUTHOR
       s3fs has been written by Randy Rizun <rrizun@gmail.com>.



S3FS                               July 2023                           S3FS(1)