S3FS(1)                          User Commands                         S3FS(1)


NAME

       S3FS - FUSE-based file system backed by Amazon S3


SYNOPSIS

   mounting
       s3fs bucket[:/path] mountpoint [options]

       s3fs mountpoint [options (must specify bucket= option)]

   unmounting
       umount mountpoint
              For root.

       fusermount -u mountpoint
              For unprivileged users.

   utility mode (remove interrupted multipart upload objects)
       s3fs --incomplete-mpu-list (-u) bucket

       s3fs --incomplete-mpu-abort[=all | =<expire date format>] bucket


DESCRIPTION

       s3fs is a FUSE filesystem that allows you to mount an Amazon S3
       bucket as a local filesystem. It stores files natively and
       transparently in S3 (i.e., you can use other programs to access the
       same files).


AUTHENTICATION

       s3fs supports the standard AWS credentials file
       (https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html)
       stored in `${HOME}/.aws/credentials`. Alternatively, s3fs supports a
       custom passwd file. Only the AWS credentials file format can be used
       when an AWS session token is required. The s3fs password file has
       this format (use this format if you have only one set of
       credentials):
           accessKeyId:secretAccessKey

       If you have more than one set of credentials, this syntax is also
       recognized:
           bucketName:accessKeyId:secretAccessKey

       Password files can be stored in two locations:
            /etc/passwd-s3fs     [0640]
            $HOME/.passwd-s3fs   [0600]

       s3fs also recognizes the AWSACCESSKEYID and AWSSECRETACCESSKEY
       environment variables.

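       For example, a single-bucket password file can be created as follows.
       The bucket name is a placeholder and the keys are AWS's documented
       example values, not real credentials:

```shell
# Create an s3fs password file; the format is bucketName:accessKeyId:secretAccessKey
# (the bucketName: prefix may be omitted when only one credential set is used).
# The keys below are AWS's documentation example values, not real credentials.
PASSWD_FILE="${HOME}/.passwd-s3fs"
printf 'mybucket:AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\n' > "$PASSWD_FILE"
chmod 600 "$PASSWD_FILE"   # match the [0600] permissions listed above
```

       With the file in place, the bucket can be mounted without passing
       credentials on the command line.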

OPTIONS

   general options
       -h   --help
              print help

            --version
              print version

       -f     FUSE foreground option - do not run as daemon.

       -s     FUSE single-threaded option (disables multi-threaded
              operation)

   mount options
       All s3fs options must be given in the form:
               <option_name>=<option_value>

       -o bucket
              If the bucket name (and path) is not specified on the command
              line, it must be supplied with this option after -o.

       -o default_acl (default="private")
              the default canned ACL to apply to all written S3 objects,
              e.g., "private", "public-read". See
              https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
              for the full list of canned ACLs.

       -o retries (default="5")
              number of times to retry a failed S3 transaction.

       -o use_cache (default="" which means disabled)
              local folder to use for the local file cache.

       -o check_cache_dir_exist (default is disable)
              If use_cache is set, check whether the cache directory exists.
              If this option is not specified, the cache directory will be
              created at runtime if it does not exist.

       -o del_cache - delete local file cache
              delete the local file cache when s3fs starts and exits.

       -o storage_class (default="standard")
              store objects with the specified storage class. This option
              replaces the old use_rrs option. Possible values: standard,
              standard_ia, onezone_ia, reduced_redundancy, and
              intelligent_tiering.

       -o use_rrs (default is disable)
              use Amazon's Reduced Redundancy Storage. This option cannot be
              specified together with use_sse. (use_rrs=1 can be specified
              for old versions.) This option has been replaced by the new
              storage_class option.

       -o use_sse (default is disable)
              Specify one of three types of Amazon Server-Side Encryption:
              SSE-S3, SSE-C or SSE-KMS. SSE-S3 uses Amazon S3-managed
              encryption keys, SSE-C uses customer-provided encryption keys,
              and SSE-KMS uses a master key which you manage in AWS KMS.
              Specifying "use_sse" or "use_sse=1" enables SSE-S3 (use_sse=1
              is the old-style parameter). For SSE-C, specify
              "use_sse=custom", "use_sse=custom:<custom key file path>" or
              "use_sse=<custom key file path>" (specifying only <custom key
              file path> is the old-style parameter). You can use "c" as a
              short form of "custom". The custom key file must have 600
              permissions. The file can contain several lines; each line is
              one SSE-C key. The first line in the file is used as the
              customer-provided encryption key for uploading and changing
              headers, etc. Any keys after the first line are used for
              downloading objects that were encrypted with a key other than
              the first one. This lets you keep all SSE-C keys in one file
              as an SSE-C key history. If you specify "custom" ("c") without
              a file path, you must supply the custom key via the load_sse_c
              option or the AWSSSECKEYS environment variable (AWSSSECKEYS
              holds one or more SSE-C keys separated by ":"). This option
              determines the SSE type, so if you do not want to encrypt
              objects on upload but still need to decrypt encrypted objects
              on download, use the load_sse_c option instead of this one.
              For SSE-KMS, specify "use_sse=kmsid" or "use_sse=kmsid:<kms
              id>". You can use "k" as a short form of "kmsid". If you want
              to specify the SSE-KMS type with your <kms id> in AWS KMS, set
              it after "kmsid:" (or "k:"). If you specify only "kmsid"
              ("k"), you must set the AWSSSEKMSID environment variable to
              the <kms id>. Be aware that you cannot use a KMS id from a
              region other than that of the EC2 instance.

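       As a sketch, an SSE-C key file with a current key and one retired key
       can be prepared like this. The key material is generated locally with
       openssl; whether your s3fs version expects base64 or another encoding
       for each key line is an assumption to verify against its
       documentation:

```shell
# Hypothetical sketch: build an SSE-C key file for use_sse=custom.
# One key per line; the first line encrypts new uploads, later lines are
# only used to decrypt objects encrypted with older keys.
KEYFILE="${HOME}/.s3fs-ssec-keys"
openssl rand -base64 32 >  "$KEYFILE"   # current upload key
openssl rand -base64 32 >> "$KEYFILE"   # retired key, kept for decryption
chmod 600 "$KEYFILE"                    # the key file must have 600 permissions
# s3fs mybucket /mnt/s3 -o use_sse=custom:"$KEYFILE"
```

       Appending old keys rather than deleting them preserves the SSE-C key
       history described above.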
       -o load_sse_c - specify SSE-C keys
              Specify the path of the customer-provided encryption keys file
              used for decryption on download. If you use a
              customer-provided encryption key on upload, specify it with
              "use_sse=custom". The file may contain many lines; each line
              is one custom key. This lets you keep all SSE-C keys in one
              file as an SSE-C key history. The AWSSSECKEYS environment
              variable has the same format as this file's contents.

       -o passwd_file (default="")
              specify the path to the password file, which takes precedence
              over the passwords in $HOME/.passwd-s3fs and /etc/passwd-s3fs

       -o ahbe_conf (default="" which means disabled)
              This option specifies the path of a configuration file that
              defines additional HTTP headers by file (object) extension.
               The configuration file format is below:
               -----------
               line         = [file suffix or regex] HTTP-header [HTTP-values]
               file suffix  = file (object) suffix; if this field is empty,
              it means "reg:(.*)" (= all objects).
               regex        = regular expression matching the file (object)
              path. This type starts with the "reg:" prefix.
               HTTP-header  = additional HTTP header name
               HTTP-values  = additional HTTP header value
               -----------
               Sample:
               -----------
               .gz                    Content-Encoding  gzip
               .Z                     Content-Encoding  compress
               reg:^/MYDIR/(.*)[.]t2$ Content-Encoding  text2
               -----------
               A sample configuration file is provided in the "test"
              directory. If you use this option to set the
              "Content-Encoding" HTTP header, please mind RFC 2616.

       -o profile (default="default")
              Choose a profile from ${HOME}/.aws/credentials to authenticate
              against S3. Note that this format matches the AWS CLI format
              and differs from the s3fs passwd format.

       -o public_bucket (default="" which means disabled)
              anonymously mount a public bucket when set to 1, ignoring the
              $HOME/.passwd-s3fs and /etc/passwd-s3fs files. S3 does not
              allow the copy object API for anonymous users, so s3fs sets
              the nocopyapi option automatically when public_bucket=1 is
              specified.

       -o connect_timeout (default="300" seconds)
              time to wait for connection before giving up.

       -o readwrite_timeout (default="120" seconds)
              time to wait between read/write activity before giving up.

       -o list_object_max_keys (default="1000")
              specify the maximum number of keys returned by the S3 list
              objects API. The default is 1000. You can set this value to
              1000 or more.

       -o max_stat_cache_size (default="100,000" entries (about 40MB))
              maximum number of entries in the stat cache and symbolic link
              cache.

       -o stat_cache_expire (default is no expire)
              specify the expire time (seconds) for entries in the stat
              cache and symbolic link cache. This expire time is measured
              from the time the entry was cached.

       -o stat_cache_interval_expire (default is no expire)
              specify the expire time (seconds) for entries in the stat
              cache and symbolic link cache. This expire time is measured
              from the last access time of the cache entry. This option is
              exclusive with stat_cache_expire, and is left for
              compatibility with older versions.

       -o enable_noobj_cache (default is disable)
              enable cache entries for objects that do not exist. s3fs
              always has to check whether a file (or sub directory) exists
              under an object (path) when executing a command, since s3fs
              recognizes directories that do not exist as objects but have
              files or sub directories under them. This increases ListBucket
              requests and hurts performance. With this option, s3fs
              memorizes in the stat cache that an object (file or directory)
              does not exist, which improves performance.

       -o no_check_certificate (by default this option is disabled)
              do not check the SSL certificate. The server certificate won't
              be checked against the available certificate authorities.

       -o ssl_verify_hostname (default="2")
              When 0, do not verify the SSL certificate against the
              hostname.

       -o nodnscache - disable DNS cache.
              s3fs always uses its DNS cache; this option disables it.

       -o nosscache - disable SSL session cache.
              s3fs always uses its SSL session cache; this option disables
              it.

       -o multireq_max (default="20")
              maximum number of parallel requests for listing objects.

       -o parallel_count (default="5")
              number of parallel requests for uploading big objects. s3fs
              uploads large objects (over 20MB) with multipart post
              requests, sent in parallel. This option limits the number of
              parallel requests s3fs issues at once. Set this value
              according to your CPU and network bandwidth.

       -o multipart_size (default="10")
              part size, in MB, for each multipart request. The minimum
              value is 5 MB and the maximum value is 5 GB.

       -o ensure_diskfree (default 0)
              sets the free disk space, in MB, to maintain. This option is
              the threshold of free space on the disk that s3fs uses for
              cache files; s3fs creates files for downloading, uploading and
              caching. If the free disk space is smaller than this value,
              s3fs uses as little disk space as possible in exchange for
              performance.

       -o singlepart_copy_limit (default="512")
              maximum size, in MB, of a single-part copy before trying
              multipart copy.

       -o host (default="https://s3.amazonaws.com")
              Set a non-Amazon host, e.g., https://example.com.

       -o servicepath (default="/")
              Set a service path when the non-Amazon host requires a prefix.

       -o url (default="https://s3.amazonaws.com")
              sets the URL to use to access Amazon S3. If you want to use
              HTTP, you can set "url=http://s3.amazonaws.com". If you do not
              use HTTPS, please specify the URL with the url option.

       -o endpoint (default="us-east-1")
              sets the endpoint (region) to use with signature version 4.
              If this option is not specified, s3fs uses the "us-east-1"
              region as the default. If s3fs cannot connect to the region
              specified by this option, it will not run. However, if you do
              not specify this option and cannot connect with the default
              region, s3fs will automatically retry against the other
              regions: s3fs can discover the correct region name from the
              error returned by the S3 server.

       -o sigv2 (default is signature version 4)
              sign AWS requests with Signature Version 2.

       -o mp_umask (default is "0000")
              sets the umask for the mount point directory. If the
              allow_other option is not set, s3fs allows access to the mount
              point only to the owner; in the opposite case s3fs allows
              access to all users by default. If you set allow_other
              together with this option, you can control the permissions of
              the mount point with this option, like a umask.

       -o umask (default is "0000")
              sets the umask for files under the mountpoint. This can allow
              users other than the mounting user to read and write to files
              that they did not create.

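       The umask arithmetic these options apply is the usual one: the
       effective mode is the base mode with the umask bits cleared. A
       plain-shell illustration (using a scratch file, not an s3fs mount):

```shell
# Effective mode = base mode & ~umask.
# A regular file created with base mode 0666 under umask 0022 gets mode 0644.
umask 0022
rm -f /tmp/umask-demo
touch /tmp/umask-demo
stat -c '%a' /tmp/umask-demo   # prints 644
```

       So, for example, umask=0222 under the mountpoint would make every
       file read-only for all users.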
       -o nomultipart - disable multipart uploads

       -o enable_content_md5 (default is disable)
              Allow the S3 server to check the data integrity of uploads via
              the Content-MD5 header. This can add CPU overhead to
              transfers.

       -o ecs (default is disable)
              This option instructs s3fs to query the ECS container
              credential metadata address instead of the instance metadata
              address.

       -o iam_role (default is no IAM role)
              This option takes an IAM role name or "auto". If you specify
              "auto", s3fs will automatically use the IAM role attached to
              the instance. Specifying this option without an argument is
              the same as specifying "auto".

       -o ibm_iam_auth (default is not using IBM IAM authentication)
              This option instructs s3fs to use IBM IAM authentication. In
              this mode, the AWSAccessKey and AWSSecretKey will be used as
              IBM's Service-Instance-ID and APIKey, respectively.

       -o ibm_iam_endpoint (default is https://iam.bluemix.net)
              Sets the URL to use for IBM IAM authentication.

       -o use_xattr (default is not handling the extended attribute)
              Enable handling of extended attributes (xattrs). If you set
              this option, you can use extended attributes. For example,
              encfs and ecryptfs need extended attribute support. Notice: if
              s3fs handles extended attributes, s3fs cannot work with the
              copy command with preserve=mode.

       -o noxmlns - disable registering the XML namespace.
              disable registering the XML namespace for responses such as
              ListBucketResult and ListVersionsResult. The default namespace
              is looked up from "http://s3.amazonaws.com/doc/2006-03-01".
              This option should not be needed now, because s3fs looks up
              the xmlns automatically since v1.66.

       -o nomixupload - disable copy in multipart uploads.
              Disable using PUT (copy API) when multipart uploading large
              objects. By default, when doing a multipart upload, ranges of
              unchanged data use PUT (copy API) whenever possible. When
              nocopyapi or norenameapi is specified, PUT (copy API) is
              disabled even if this option is not specified.

       -o nocopyapi - for other object storage with incomplete compatibility.
              For distributed object storage that is compatible with the S3
              API but lacks PUT (copy API). If you set this option, s3fs
              does not use PUT with "x-amz-copy-source" (copy API). Because
              this option increases traffic 2-3 times, we do not recommend
              it.

       -o norenameapi - for other object storage with incomplete compatibility.
              For distributed object storage that is compatible with the S3
              API but lacks PUT (copy API). This option is a subset of the
              nocopyapi option: nocopyapi avoids the copy API for all
              commands (e.g. chmod, chown, touch, mv, etc.), while this
              option avoids the copy API only for the rename command (e.g.
              mv). If this option is specified together with nocopyapi, s3fs
              ignores it.

       -o use_path_request_style (use legacy API calling style)
              Enable compatibility with S3-like APIs which do not support
              the virtual-host request style, by using the older path
              request style.

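       As an illustrative sketch, mounting a bucket on an S3-compatible
       service that lacks virtual-host style requests typically combines the
       url and path-style options (the hostname, bucket and mountpoint below
       are placeholders, and valid credentials must already be configured):

```shell
# Illustrative only: mount "mybucket" from a non-Amazon, S3-compatible
# endpoint that only supports path-style requests.
s3fs mybucket /mnt/s3 \
    -o url=https://storage.example.com \
    -o use_path_request_style
```

       Services with incomplete copy-API support may additionally need the
       nocopyapi or norenameapi options described above.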
       -o noua (suppress User-Agent header)
              Usually s3fs sends a User-Agent header in the format
              "s3fs/<version> (commit hash <hash>; <using ssl library
              name>)". If this option is specified, s3fs suppresses the
              User-Agent header.

       -o cipher_suites
              Customize the list of TLS cipher suites. Expects a
              colon-separated list of cipher suite names. A list of
              available cipher suites, depending on your TLS engine, can be
              found in the cURL library documentation:
              https://curl.haxx.se/docs/ssl-ciphers.html

       -o instance_name
              The instance name of the current s3fs mountpoint. This name
              will be added to logging messages and user agent headers sent
              by s3fs.

       -o complement_stat (complement lack of file/directory mode)
              s3fs complements missing information about file/directory mode
              when a file or directory object does not have an
              x-amz-meta-mode header. By default, s3fs does not complement
              stat information for such objects, so they may not be allowed
              to be listed/modified.

       -o notsup_compat_dir (not support compatibility directory types)
              By default, s3fs supports directory-type objects as much as
              possible and recognizes them as directories. Objects that can
              be recognized as directory objects are "dir/", "dir",
              "dir_$folder$", and a file object that has no directory object
              but whose key contains that directory path. s3fs needs
              redundant communication to support all these directory types.
              Directory objects created by s3fs have the "dir/" form. By
              restricting s3fs to recognize only "dir/" as a directory,
              communication traffic can be reduced. This option imposes that
              restriction on s3fs. However, if there are directory objects
              other than "dir/" in the bucket, specifying this option is not
              recommended, because s3fs may not be able to recognize such
              objects correctly. Please use this option only when every
              directory in the bucket is a "dir/" object.

       -o use_wtf8 - support arbitrary file system encoding.
              S3 requires all object names to be valid UTF-8. But some
              clients, notably Windows NFS clients, use their own encoding.
              This option re-encodes invalid UTF-8 object names into valid
              UTF-8 by mapping offending codes into a 'private' codepage of
              the Unicode set. Useful on clients not using UTF-8 as their
              file system encoding.

       -o use_session_token - indicate that a session token should be provided.
              If credentials are provided by environment variables, this
              switch forces a presence check of the AWSSESSIONTOKEN
              variable. Otherwise an error is returned.

       -o requester_pays (default is disable)
              This option instructs s3fs to enable requests involving
              Requester Pays buckets (it includes the
              'x-amz-request-payer=requester' entry in the request header).

       -o dbglevel (default="crit")
              Set the debug message level: crit (critical), err (error),
              warn (warning), or info (information). The default debug level
              is critical. If s3fs is run with the "-d" option, the debug
              level is set to information. When s3fs catches the SIGUSR2
              signal, the debug level is bumped up.

       -o curldbg - put curl debug messages
              Print debug messages from libcurl when this option is
              specified.

   utility mode options
       -u or --incomplete-mpu-list
              Lists incomplete multipart objects uploaded to the specified
              bucket.

       --incomplete-mpu-abort all or date format (default="24H")
              Delete incomplete multipart objects uploaded to the specified
              bucket. If "all" is specified for this option, all incomplete
              multipart objects will be deleted. If you specify no argument,
              objects older than 24 hours (24H) will be deleted (this is the
              default value). You can also specify an optional date format
              in terms of years, months, days, hours, minutes and seconds,
              expressed as "Y", "M", "D", "h", "m" and "s" respectively. For
              example, "1Y6M10D12h30m30s".

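       Illustrative usage of the utility mode options ("mybucket" is a
       placeholder, and valid credentials are required):

```shell
# Illustrative only: inspect and clean up interrupted multipart uploads.
s3fs -u mybucket                                  # list incomplete multipart uploads
s3fs --incomplete-mpu-abort mybucket              # abort uploads older than 24 hours
s3fs --incomplete-mpu-abort=1Y6M10D12h30m30s mybucket   # custom age threshold
s3fs --incomplete-mpu-abort=all mybucket          # abort every incomplete upload
```

       Aborting stale uploads also stops S3 from billing for the storage
       their parts consume.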

FUSE/MOUNT OPTIONS

       Most of the generic mount options described in 'man mount' are
       supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime,
       noatime, sync, async, dirsync). Filesystems are mounted with
       '-onodev,nosuid' by default, which can only be overridden by a
       privileged user.

       There are many FUSE-specific mount options that can be specified,
       e.g. allow_other. See the FUSE README for the full set.

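       An s3fs mount can also be declared in /etc/fstab. One common form,
       assuming a mount(8) that understands the fuse.TYPE notation (the
       bucket, mountpoint and cache path below are placeholders):

```
mybucket /mnt/s3 fuse.s3fs _netdev,allow_other,use_cache=/tmp/s3fs-cache 0 0
```

       The _netdev option delays mounting until the network is up;
       allow_other is the FUSE option mentioned above.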

NOTES

       The maximum size of objects that s3fs can handle depends on Amazon
       S3: up to 5 GB when using the single PUT API, and up to 5 TB when the
       Multipart Upload API is used.

       If enabled via the "use_cache" option, s3fs automatically maintains a
       local cache of files in the folder specified by use_cache. Whenever
       s3fs needs to read or write a file on S3, it first downloads the
       entire file locally to the folder specified by use_cache and operates
       on it. When fuse_release() is called, s3fs will re-upload the file to
       S3 if it has been changed. s3fs uses MD5 checksums to minimize
       downloads from S3.

       The folder specified by use_cache is just a local cache. It can be
       deleted at any time; s3fs rebuilds it on demand.

       Local file caching works by calculating and comparing MD5 checksums
       (ETag HTTP header).

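       This comparison can be reproduced by hand for single-part objects,
       whose S3 ETag is the hex MD5 digest of the object body (objects
       uploaded via the Multipart Upload API use a different ETag scheme):

```shell
# For a single-part upload, the S3 ETag equals the hex MD5 of the file body,
# which is the value s3fs compares against its local cache copy.
printf 'hello world\n' > /tmp/etag-demo.txt
md5sum /tmp/etag-demo.txt   # prints 6f5902ac237024bdd0c176cb93063dc4
```

       If the local digest matches the object's ETag, s3fs can serve the
       cached copy without downloading again.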
       s3fs leverages /etc/mime.types to "guess" the "correct" content-type
       based on the file name extension. This means that you can copy a
       website to S3 and serve it up directly from S3 with correct
       content-types!


SEE ALSO

       fuse(8), mount(8), fusermount(1), fstab(5)


BUGS

       Due to S3's "eventual consistency" limitations, file creation can and
       will occasionally fail. Even after a successful create, subsequent
       reads can fail for an indeterminate time, even after one or more
       successful reads. Create and read enough files and you will
       eventually encounter this failure. This is not a flaw in s3fs and it
       is not something a FUSE wrapper like s3fs can work around. The
       retries option does not address this issue. Your application must
       either tolerate or compensate for these failures, for example by
       retrying creates or reads.


AUTHOR

       s3fs has been written by Randy Rizun <rrizun@gmail.com>.



S3FS                             February 2011                         S3FS(1)