S3FS(1)                          User Commands                         S3FS(1)

NAME
       S3FS - FUSE-based file system backed by Amazon S3

SYNOPSIS
   mounting
       s3fs bucket[:/path] mountpoint [options]

       s3fs mountpoint [options (must specify bucket= option)]

   unmounting
       umount mountpoint
              For root.

       fusermount -u mountpoint
              For unprivileged users.

   utility mode (remove interrupted multipart upload objects)
       s3fs --incomplete-mpu-list (-u) bucket

       s3fs --incomplete-mpu-abort[=all | =<expire date format>] bucket

DESCRIPTION
       s3fs is a FUSE filesystem that allows you to mount an Amazon S3
       bucket as a local filesystem.  It stores files natively and
       transparently in S3 (i.e., you can use other programs to access the
       same files).

AUTHENTICATION
       s3fs supports the standard AWS credentials file
       (https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html)
       stored in `${HOME}/.aws/credentials`.  Alternatively, s3fs supports a
       custom passwd file.  Only the AWS credentials file format can be used
       when an AWS session token is required.  The s3fs password file has
       this format (use this format if you have only one set of
       credentials):
           accessKeyId:secretAccessKey

       If you have more than one set of credentials, this syntax is also
       recognized:
           bucketName:accessKeyId:secretAccessKey

       Password files can be stored in two locations:
            /etc/passwd-s3fs     [0640]
            $HOME/.passwd-s3fs   [0600]

       s3fs also recognizes the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
       environment variables.
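
       For example, a minimal single-credential setup (the key values below
       are the standard AWS documentation placeholders, and the bucket and
       mountpoint names are illustrative) might look like:

           echo "AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > ${HOME}/.passwd-s3fs
           chmod 600 ${HOME}/.passwd-s3fs
           s3fs mybucket /path/to/mountpoint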

OPTIONS
   general options
       -h   --help
              print help

            --version
              print version

       -f     FUSE foreground option - do not run as daemon.

       -s     FUSE single-threaded option (disables multi-threaded
              operation)

   mount options
       All s3fs options must be given in the form
               -o <option_name>=<option_value>

       -o bucket
              if the bucket name (and path) is not specified on the command
              line, it must be specified with this option.

       -o default_acl (default="private")
              the default canned acl to apply to all written s3 objects,
              e.g., "private", "public-read".  See
              https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              for the full list of canned ACLs.

       -o retries (default="5")
              number of times to retry a failed S3 transaction.

       -o tmpdir (default="/tmp")
              local folder for temporary files.

       -o use_cache (default="" which means disabled)
              local folder to use for the local file cache.

       -o check_cache_dir_exist (default is disabled)
              if use_cache is set, check whether the cache directory
              exists.  If this option is not specified, the cache directory
              will be created at runtime if it does not exist.

       -o del_cache - delete local file cache
              delete the local file cache when s3fs starts and exits.
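
       For example, to keep a persistent local cache while reserving 10 GB
       of free disk space (the paths and sizes are illustrative; see "-o
       ensure_diskfree" below):

           s3fs mybucket /mnt/s3 -o use_cache=/var/cache/s3fs -o ensure_diskfree=10240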

       -o storage_class (default="standard")
              store objects with the specified storage class.  Possible
              values: standard, standard_ia, onezone_ia,
              reduced_redundancy, intelligent_tiering, glacier, and
              deep_archive.

       -o use_rrs (default is disabled)
              use Amazon's Reduced Redundancy Storage.  This option cannot
              be specified with use_sse.  (You can specify use_rrs=1 for
              compatibility with older versions.)  This option has been
              replaced by the new storage_class option.

       -o use_sse (default is disabled)
              Specify one of three types of Amazon Server-Side Encryption:
              SSE-S3, SSE-C or SSE-KMS.  SSE-S3 uses Amazon S3-managed
              encryption keys, SSE-C uses customer-provided encryption
              keys, and SSE-KMS uses a master key which you manage in AWS
              KMS.  Specifying "use_sse" or "use_sse=1" enables SSE-S3
              (use_sse=1 is the old-style parameter).  For SSE-C, specify
              "use_sse=custom", "use_sse=custom:<custom key file path>" or
              "use_sse=<custom key file path>" (a bare <custom key file
              path> is the old-style parameter).  You can use "c" as short
              for "custom".  The custom key file must have 600 permissions.
              The file may contain multiple lines, each line holding one
              SSE-C key.  The first line in the file is used as the
              customer-provided encryption key for uploading and changing
              headers etc.  Any keys after the first line are used for
              downloading objects that were encrypted with a key other than
              the first one.  This way you can keep all your SSE-C keys in
              the file as an SSE-C key history.  If you specify "custom"
              ("c") without a file path, you need to supply the custom key
              via the load_sse_c option or the AWSSSECKEYS environment
              variable (AWSSSECKEYS holds one or more SSE-C keys separated
              by ":").  This option decides the SSE type, so if you do not
              want to encrypt objects when uploading, but need to decrypt
              encrypted objects when downloading, use the load_sse_c option
              instead of this option.  For SSE-KMS, specify "use_sse=kmsid"
              or "use_sse=kmsid:<kms id>".  You can use "k" as short for
              "kmsid".  If you have a <kms id> in AWS KMS, you can set it
              after "kmsid:" (or "k:").  If you specify only "kmsid" ("k"),
              you need to set the AWSSSEKMSID environment variable to your
              <kms id>.  Be careful: you cannot use a KMS id from a region
              other than your EC2 region.
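
       For example (the key file path and KMS key id below are
       placeholders):

           # SSE-S3
           s3fs mybucket /mnt/s3 -o use_sse
           # SSE-C, reading keys from a 600-permission key file
           s3fs mybucket /mnt/s3 -o use_sse=custom:/path/to/sse-c.keys
           # SSE-KMS with an explicit KMS key id
           s3fs mybucket /mnt/s3 -o use_sse=kmsid:1234abcd-12ab-34cd-56ef-1234567890ab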

       -o load_sse_c - specify SSE-C keys
              Specify the path to a file of customer-provided encryption
              keys used for decrypting downloads.  If you also use a
              customer-provided encryption key for uploading, specify it
              with "use_sse=custom".  The file may contain multiple lines,
              each line holding one custom key, so you can keep all your
              SSE-C keys in the file as an SSE-C key history.  The
              AWSSSECKEYS environment variable holds the same contents as
              this file.

       -o passwd_file (default="")
              specify the path to the password file, which takes precedence
              over the password in $HOME/.passwd-s3fs and /etc/passwd-s3fs

       -o ahbe_conf (default="" which means disabled)
              This option specifies the path to a configuration file that
              defines additional HTTP headers by file (object) extension.
               The configuration file format is below:
               -----------
               line         = [file suffix or regex] HTTP-header [HTTP-values]
               file suffix  = file (object) suffix; if this field is empty,
              it means "reg:(.*)" (= all objects).
               regex        = regular expression to match the file (object)
              path; this type starts with the "reg:" prefix.
               HTTP-header  = additional HTTP header name
               HTTP-values  = additional HTTP header value
               -----------
               Sample:
               -----------
               .gz                    Content-Encoding  gzip
               .Z                     Content-Encoding  compress
               reg:^/MYDIR/(.*)[.]t2$ Content-Encoding  text2
               -----------
               A sample configuration file is provided in the "test"
              directory.  If you use this option to set the
              "Content-Encoding" HTTP header, please take care to follow
              RFC 2616.

       -o profile (default="default")
              Choose a profile from ${HOME}/.aws/credentials to
              authenticate against S3.  Note that this format matches the
              AWS CLI format and differs from the s3fs passwd format.
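
       For example, with a profile named "work" (a placeholder) in
       ${HOME}/.aws/credentials:

           [work]
           aws_access_key_id = AKIAIOSFODNN7EXAMPLE
           aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

       the bucket can be mounted with:

           s3fs mybucket /mnt/s3 -o profile=work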

       -o public_bucket (default="" which means disabled)
              anonymously mount a public bucket when set to 1, ignoring the
              $HOME/.passwd-s3fs and /etc/passwd-s3fs files.  S3 does not
              allow the copy object API for anonymous users, so s3fs sets
              the nocopyapi option automatically when public_bucket=1 is
              specified.

       -o connect_timeout (default="300" seconds)
              time to wait for connection before giving up.

       -o readwrite_timeout (default="120" seconds)
              time to wait between read/write activity before giving up.

       -o list_object_max_keys (default="1000")
              specify the maximum number of keys returned by the S3 list
              object API.  The default is 1000.  You can set this value to
              1000 or more.

       -o max_stat_cache_size (default="100,000" entries (about 40MB))
              maximum number of entries in the stat cache and symbolic link
              cache.

       -o stat_cache_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache.  This expiry time is measured
              from when the entry was cached.

       -o stat_cache_interval_expire (default is 900)
              specify the expiry time (seconds) for entries in the stat
              cache and symbolic link cache.  This expiry time is measured
              from the last access time of the cache entry.  This option is
              exclusive with stat_cache_expire, and is left for
              compatibility with older versions.
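
       For example, to bound the stat cache at 50,000 entries and expire
       them five minutes after caching (the values are illustrative):

           s3fs mybucket /mnt/s3 -o max_stat_cache_size=50000 -o stat_cache_expire=300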

       -o enable_noobj_cache (default is disabled)
              enable cache entries for objects that do not exist.  s3fs
              always has to check whether a file (or sub directory) exists
              under an object (path) when executing a command, since s3fs
              recognizes a directory that does not exist as an object but
              still has files or sub directories under it.  This increases
              ListBucket requests and hurts performance.  You can specify
              this option for better performance; s3fs then memorizes in
              the stat cache that the object (file or directory) does not
              exist.

       -o no_check_certificate (by default this option is disabled)
              server certificate won't be checked against the available
              certificate authorities.

       -o ssl_verify_hostname (default="2")
              When 0, do not verify the SSL certificate against the
              hostname.

       -o nodnscache - disable DNS cache.
              s3fs uses a DNS cache by default; this option disables it.

       -o nosscache - disable SSL session cache.
              s3fs uses an SSL session cache by default; this option
              disables it.

       -o multireq_max (default="20")
              maximum number of parallel requests for listing objects.

       -o parallel_count (default="5")
              number of parallel requests for uploading big objects.  s3fs
              uploads large objects (over 20MB) with multipart post
              requests, sending several requests in parallel.  This option
              limits the number of parallel requests s3fs issues at once.
              Set this value according to your CPU and network bandwidth.

       -o multipart_size (default="10")
              part size, in MB, for each multipart request.  The minimum
              value is 5 MB and the maximum value is 5 GB.

       -o multipart_copy_size (default="512")
              part size, in MB, for each multipart copy request, used for
              renames and mixupload.  The minimum value is 5 MB and the
              maximum value is 5 GB.  Must be at least 512 MB to copy the
              maximum 5 TB object size, but lower values may improve
              performance.

       -o max_dirty_data (default="5120")
              Flush dirty data to S3 after a certain number of MB written.
              The minimum value is 50 MB.  A value of -1 disables this
              behavior.  Cannot be used with nomixupload.

       -o ensure_diskfree (default 0)
              sets the MB of disk space to keep free.  This option is the
              threshold of free space on the disk that holds the cache
              files used by s3fs.  s3fs creates files for downloading,
              uploading and caching.  If the free disk space falls below
              this value, s3fs uses as little disk space as possible, in
              exchange for performance.

       -o multipart_threshold (default="25")
              threshold, in MB, to use multipart upload instead of
              single-part.  Must be at least 5 MB.

       -o singlepart_copy_limit (default="512")
              maximum size, in MB, of a single-part copy before trying
              multipart copy.

       -o host (default="https://s3.amazonaws.com")
              Set a non-Amazon host, e.g., https://example.com.

       -o servicepath (default="/")
              Set a service path when the non-Amazon host requires a
              prefix.

       -o url (default="https://s3.amazonaws.com")
              sets the url to use to access Amazon S3.  If you want to use
              HTTP, you can set "url=http://s3.amazonaws.com".  If you do
              not want to use https, specify the URL with this option.

       -o endpoint (default="us-east-1")
              sets the endpoint (region) to use for signature version 4.
              If this option is not specified, s3fs uses the "us-east-1"
              region as the default.  If s3fs cannot connect to the region
              specified by this option, it fails to run.  However, if you
              do not specify this option and the default region cannot be
              reached, s3fs automatically retries against another region:
              s3fs can determine the correct region name because it appears
              in the error returned by the S3 server.
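
       For example, to mount a bucket on a non-Amazon, S3-compatible store
       (the URL is a placeholder; see also "-o use_path_request_style"
       below):

           s3fs mybucket /mnt/s3 -o url=https://s3.example.com -o use_path_request_style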

       -o sigv2 (default is signature version 4 falling back to version 2)
              sets signing AWS requests by using only signature version 2.

       -o sigv4 (default is signature version 4 falling back to version 2)
              sets signing AWS requests by using only signature version 4.

       -o mp_umask (default is "0000")
              sets the umask for the mount point directory.  If the
              allow_other option is not set, s3fs allows access to the
              mount point only to the owner; otherwise it allows access to
              all users by default.  If you set allow_other together with
              this option, you can control the permissions of the mount
              point with this option like a umask.

       -o umask (default is "0000")
              sets the umask for files under the mountpoint.  This can
              allow users other than the mounting user to read and write
              files that they did not create.
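
       For example, to let all local users read and write files under the
       mount point:

           s3fs mybucket /mnt/s3 -o allow_other -o umask=0000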

       -o nomultipart - disable multipart uploads

       -o enable_content_md5 (default is disabled)
              Allow the S3 server to check data integrity of uploads via
              the Content-MD5 header.  This can add CPU overhead to
              transfers.

       -o enable_unsigned_payload (default is disabled)
              Do not calculate Content-SHA256 for PutObject and UploadPart
              payloads.  This can reduce CPU overhead for transfers.

       -o ecs (default is disabled)
              This option instructs s3fs to query the ECS container
              credential metadata address instead of the instance metadata
              address.

       -o iam_role (default is no IAM role)
              This option takes an IAM role name or "auto".  If you specify
              "auto", s3fs will automatically use the IAM role attached to
              the instance.  Specifying this option without an argument is
              the same as specifying "auto".
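
       For example, on an EC2 instance with an IAM role attached (bucket
       and mountpoint names are illustrative):

           s3fs mybucket /mnt/s3 -o iam_role=auto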

       -o imdsv1only (default is to use IMDSv2 with fallback to v1)
              AWS instance metadata service, used with IAM role
              authentication, supports the use of an API token.  If you're
              using an IAM role in an environment that does not support
              IMDSv2, setting this flag will skip retrieval and usage of
              the API token when retrieving IAM credentials.

       -o ibm_iam_auth (default is not using IBM IAM authentication)
              This option instructs s3fs to use IBM IAM authentication.  In
              this mode, the AWSAccessKey and AWSSecretKey will be used as
              IBM's Service-Instance-ID and APIKey, respectively.

       -o ibm_iam_endpoint (default is https://iam.cloud.ibm.com)
              Sets the URL to use for IBM IAM authentication.

       -o use_xattr (default is not handling extended attributes)
              Enable handling of extended attributes (xattrs).  If you set
              this option, you can use extended attributes.  For example,
              encfs and ecryptfs need extended attribute support.  Notice:
              if s3fs handles extended attributes, s3fs cannot work with
              the copy command with preserve=mode.

       -o noxmlns - disable registering xml name space.
              disable registering the xml name space for responses of
              ListBucketResult and ListVersionsResult etc.  The default
              name space is looked up from
              "http://s3.amazonaws.com/doc/2006-03-01".  This option should
              no longer be needed, because s3fs looks up the xmlns
              automatically since v1.66.

       -o nomixupload - disable copy in multipart uploads.
              Disable using PUT (copy api) when multipart uploading large
              objects.  By default, when doing a multipart upload, the
              ranges of unchanged data use PUT (copy api) whenever
              possible.  When nocopyapi or norenameapi is specified, use of
              PUT (copy api) is disabled even if this option is not
              specified.

       -o nocopyapi - for object storage with incomplete compatibility.
              For distributed object storage which is compatible with the
              S3 API but lacks PUT (copy api).  If you set this option,
              s3fs does not use PUT with "x-amz-copy-source" (copy api).
              Because traffic is increased 2-3 times by this option, we do
              not recommend it.

       -o norenameapi - for object storage with incomplete compatibility.
              For distributed object storage which is compatible with the
              S3 API but lacks PUT (copy api).  This option is a subset of
              the nocopyapi option.  The nocopyapi option avoids the
              copy-api for all commands (e.g. chmod, chown, touch, mv,
              etc.), while this option avoids the copy-api only for the
              rename command (e.g. mv).  If this option is specified with
              nocopyapi, then s3fs ignores it.

       -o use_path_request_style (use legacy API calling style)
              Enable compatibility with S3-like APIs which do not support
              the virtual-host request style, by using the older path
              request style.

       -o listobjectsv2 (use ListObjectsV2)
              Issue ListObjectsV2 instead of ListObjects, useful on object
              stores without ListObjects support.

       -o noua (suppress User-Agent header)
              Usually s3fs sends a User-Agent header in the format
              "s3fs/<version> (commit hash <hash>; <using ssl library
              name>)".  If this option is specified, s3fs suppresses the
              User-Agent header.

       -o cipher_suites
              Customize the list of TLS cipher suites.  Expects a
              colon-separated list of cipher suite names.  A list of
              available cipher suites, depending on your TLS engine, can be
              found in the CURL library documentation:
              https://curl.haxx.se/docs/ssl-ciphers.html

       -o instance_name
              The instance name of the current s3fs mountpoint.  This name
              will be added to logging messages and user agent headers sent
              by s3fs.

       -o complement_stat (complement lack of file/directory mode)
              s3fs complements the missing file/directory mode information
              if a file or directory object does not have an
              x-amz-meta-mode header.  By default, s3fs does not complement
              stat information for an object, so such an object may not be
              allowed to be listed or modified.

       -o notsup_compat_dir (disable support of alternative directory names)
              s3fs supports the three different naming schemas "dir/",
              "dir" and "dir_$folder$" to map directory names to S3 objects
              and vice versa.  As a fourth variant, directories can be
              determined indirectly if there is a file object with a path
              (e.g. "/dir/file") but without the parent directory.

              s3fs uses only the first schema, "dir/", to create S3 objects
              for directories.

              The support for these different naming schemas causes
              increased communication effort.

              If all applications exclusively use the "dir/" naming schema
              and the bucket does not contain any objects with a different
              naming schema, this option can be used to disable support for
              alternative naming schemas.  This reduces access time and can
              save costs.

       -o use_wtf8 - support arbitrary file system encoding.
              S3 requires all object names to be valid UTF-8.  But some
              clients, notably Windows NFS clients, use their own encoding.
              This option re-encodes invalid UTF-8 object names into valid
              UTF-8 by mapping offending codes into a 'private' codepage of
              the Unicode set.  Useful on clients not using UTF-8 as their
              file system encoding.

       -o use_session_token - indicate that a session token should be
              provided.
              If credentials are provided by environment variables, this
              switch forces a presence check of the AWS_SESSION_TOKEN
              variable.  Otherwise an error is returned.

       -o requester_pays (default is disabled)
              This option instructs s3fs to enable requests involving
              Requester Pays buckets (it includes the
              'x-amz-request-payer=requester' entry in the request header).

       -o mime (default is "/etc/mime.types")
              Specify the path of the mime.types file.  If this option is
              not specified, the existence of "/etc/mime.types" is checked,
              and that file is loaded as mime information.  If this file
              does not exist on macOS, then "/etc/apache2/mime.types" is
              checked as well.

       -o logfile - specify the log output file.
              By default s3fs logs to syslog.  Alternatively, if s3fs is
              started with the "-f" option, the log is written to
              stdout/stderr.  You can use this option to specify a log file
              that s3fs writes to instead.  If you specify a log file with
              this option, it will be reopened when s3fs receives a SIGHUP
              signal, so you can use SIGHUP for log rotation.
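
       For example, to log to a file and reopen it after rotating it (the
       paths are illustrative):

           s3fs mybucket /mnt/s3 -o logfile=/var/log/s3fs.log
           mv /var/log/s3fs.log /var/log/s3fs.log.1
           killall -HUP s3fs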

       -o dbglevel (default="crit")
              Set the debug message level.  Set the value to crit
              (critical), err (error), warn (warning) or info (information)
              to choose the debug level.  The default debug level is
              critical.  If s3fs runs with the "-d" option, the debug level
              is set to information.  When s3fs catches the SIGUSR2 signal,
              the debug level is bumped up.

       -o curldbg - put curl debug messages
              Output debug messages from libcurl when this option is
              specified.  Specify "normal" or "body" as the parameter.  If
              the parameter is omitted, it is the same as "normal".  If
              "body" is specified, some API communication body data will be
              output in addition to the debug messages output by "normal".

       -o no_time_stamp_msg - no time stamp in debug messages
              A time stamp is output with debug messages by default.  If
              this option is specified, the time stamp will not be output
              in debug messages.  It has the same effect as setting the
              environment variable "S3FS_MSGTIMESTAMP" to "no".

       -o set_check_cache_sigusr1 (default is stdout)
              If the cache is enabled, you can check the integrity of the
              cache files and the cache files' stats info files.  When this
              option is specified, sending the SIGUSR1 signal to the s3fs
              process checks the cache status at that time.  This option
              can take a file path as a parameter to output the check
              result to that file.  The file path parameter can be omitted;
              if omitted, the result will be output to stdout or syslog.

   utility mode options
       -u or --incomplete-mpu-list
              Lists incomplete multipart objects uploaded to the specified
              bucket.

       --incomplete-mpu-abort all or date format (default="24H")
              Delete incomplete multipart objects uploaded to the specified
              bucket.  If "all" is specified for this option, all
              incomplete multipart objects will be deleted.  If you specify
              no argument for this option, objects older than 24 hours
              (24H) will be deleted (this is the default value).  You can
              specify an optional date format.  It can be specified as
              years, months, days, hours, minutes and seconds, expressed as
              "Y", "M", "D", "h", "m", "s" respectively.  For example,
              "1Y6M10D12h30m30s".
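
       For example, to list incomplete multipart uploads in a bucket and
       then abort those older than one day:

           s3fs -u mybucket
           s3fs --incomplete-mpu-abort=24H mybucket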

FUSE/MOUNT OPTIONS
       Most of the generic mount options described in 'man mount' are
       supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime,
       noatime, sync, async, dirsync).  Filesystems are mounted with
       '-onodev,nosuid' by default, which can only be overridden by a
       privileged user.

       There are many FUSE-specific mount options that can be specified,
       e.g. allow_other.  See the FUSE README for the full set.
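
       As a sketch (assuming your system's mount(8) supports the fuse.s3fs
       helper type; check your distribution), a bucket can also be mounted
       at boot via /etc/fstab:

           mybucket /mnt/s3 fuse.s3fs _netdev,allow_other 0 0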

LOCAL STORAGE CONSUMPTION
       s3fs requires local caching for operation.  You can enable a local
       cache with "-o use_cache"; otherwise s3fs uses temporary files to
       cache pending requests to S3.

       Apart from the requirements discussed below, it is recommended to
       keep enough cache or temporary storage to allow one copy each of all
       files open for reading and writing at any one time.

          Local cache with "-o use_cache"

       s3fs automatically maintains a local cache of files.  The cache
       folder is specified by the parameter of "-o use_cache".  It is only
       a local cache that can be deleted at any time.  s3fs rebuilds it if
       necessary.

       Whenever s3fs needs to read or write a file on S3, it first creates
       the file in the cache directory and operates on it.

       The amount of local cache storage used can be indirectly controlled
       with "-o ensure_diskfree".

          Without local cache

       Since s3fs always requires some storage space for operation, it
       creates temporary files to store incoming write requests until the
       required S3 request size is reached and the segment has been
       uploaded.  After that, this data is truncated in the temporary file
       to free up storage space.

       Per file you need at least twice the part size (default 10 MB, set
       with "-o multipart_size") for writing multipart requests, or space
       for the whole file if single requests are enabled ("-o
       nomultipart").

PERFORMANCE CONSIDERATIONS
       This section discusses settings to improve s3fs performance.

       In most cases, backend performance cannot be controlled and is
       therefore not part of this discussion.

       Details of the local storage usage are discussed in "LOCAL STORAGE
       CONSUMPTION".

          CPU and Memory Consumption

       s3fs is a multi-threaded application.  Depending on the workload it
       may use multiple CPUs and a certain amount of memory.  You can
       monitor the CPU and memory consumption with the "top" utility.


          Performance of S3 requests

       s3fs provides several options (e.g. "-o multipart_size", "-o
       parallel_count") to control behaviour and thus indirectly the
       performance.  The possible combinations of these options in
       conjunction with the various S3 backends are so varied that there is
       no individual recommendation other than the default values.
       Improved individual settings can be found by testing and measuring.
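
       For example, a starting point for measuring large-object upload
       throughput (the values are illustrative, not a recommendation):

           s3fs mybucket /mnt/s3 -o multipart_size=64 -o parallel_count=10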

       The two options "Enable no object cache" ("-o enable_noobj_cache")
       and "Disable support of alternative directory names" ("-o
       notsup_compat_dir") can be used to control shared access to the same
       bucket by different applications (a combined example follows this
       list):

       •      Enable no object cache ("-o enable_noobj_cache")

              If a bucket is used exclusively by an s3fs instance, you can
              enable the cache for non-existent files and directories with
              "-o enable_noobj_cache".  This eliminates repeated requests
              to check the existence of an object, saving time and possibly
              money.

       •      Disable support of alternative directory names ("-o
              notsup_compat_dir")

              s3fs supports "dir/", "dir" and "dir_$folder$" to map
              directory names to S3 objects and vice versa.

              Some applications use a different naming schema for
              associating directory names to S3 objects.  For example,
              Apache Hadoop uses the "dir_$folder$" schema to create S3
              objects for directories.

              The option "-o notsup_compat_dir" can be set if all accessing
              tools use the "dir/" naming schema for directory objects and
              the bucket does not contain any objects with a different
              naming schema.  In this case, accessing directory objects
              saves time and possibly money because alternative schemas are
              not checked.
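
       When both conditions hold, the two options can be combined:

           s3fs mybucket /mnt/s3 -o enable_noobj_cache -o notsup_compat_dir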

NOTES
       The maximum size of objects that s3fs can handle depends on Amazon
       S3: up to 5 GB when using the single PUT API, and up to 5 TB when
       the Multipart Upload API is used.

       s3fs leverages /etc/mime.types to "guess" the "correct" content-type
       based on the file name extension.  This means that you can copy a
       website to S3 and serve it up directly from S3 with correct
       content-types!

SEE ALSO
       fuse(8), mount(8), fusermount(1), fstab(5)

BUGS
       Due to S3's "eventual consistency" limitations, file creation can
       and will occasionally fail.  Even after a successful create,
       subsequent reads can fail for an indeterminate time, even after one
       or more successful reads.  Create and read enough files and you will
       eventually encounter this failure.  This is not a flaw in s3fs and
       it is not something a FUSE wrapper like s3fs can work around.  The
       retries option does not address this issue.  Your application must
       either tolerate or compensate for these failures, for example by
       retrying creates or reads.

AUTHOR
       s3fs has been written by Randy Rizun <rrizun@gmail.com>.



S3FS                              March 2022                           S3FS(1)