NFS(5)                        File Formats Manual                       NFS(5)

NAME
       nfs - fstab format and options for the nfs file systems

SYNOPSIS
       /etc/fstab

DESCRIPTION
       NFS is an Internet Standard protocol created by Sun Microsystems in
       1984.  NFS was developed to allow file sharing between systems
       residing on a local area network.  Depending on kernel configuration,
       the Linux NFS client may support NFS versions 3, 4.0, 4.1, or 4.2.

       The mount(8) command attaches a file system to the system's name
       space hierarchy at a given mount point.  The /etc/fstab file
       describes how mount(8) should assemble a system's file name
       hierarchy from various independent file systems (including file
       systems exported by NFS servers).  Each line in the /etc/fstab file
       describes a single file system, its mount point, and a set of
       default mount options for that mount point.

       For NFS file system mounts, a line in the /etc/fstab file specifies
       the server name, the path name of the exported server directory to
       mount, the local directory that is the mount point, the type of file
       system that is being mounted, and a list of mount options that
       control the way the filesystem is mounted and how the NFS client
       behaves when accessing files on this mount point.  The fifth and
       sixth fields on each line are not used by NFS, thus conventionally
       each contains the digit zero.  For example:

               server:path   /mountpoint   fstype   option,option,...   0 0

       The server's hostname and export pathname are separated by a colon,
       while the mount options are separated by commas.  The remaining
       fields are separated by blanks or tabs.

       The server's hostname can be an unqualified hostname, a fully
       qualified domain name, a dotted quad IPv4 address, or an IPv6
       address enclosed in square brackets.  Link-local and site-local IPv6
       addresses must be accompanied by an interface identifier.  See
       ipv6(7) for details on specifying raw IPv6 addresses.

       The fstype field contains "nfs".  Use of the "nfs4" fstype in
       /etc/fstab is deprecated.

MOUNT OPTIONS
       Refer to mount(8) for a description of generic mount options
       available for all file systems.  If you do not need to specify any
       mount options, use the generic option defaults in /etc/fstab.

   Options supported by all versions
       These options are valid to use with any NFS version.

       nfsvers=n      The NFS protocol version number used to contact the
                      server's NFS service.  If the server does not support
                      the requested version, the mount request fails.  If
                      this option is not specified, the client tries
                      version 4.2 first, then negotiates down until it
                      finds a version supported by the server.

       vers=n         This option is an alternative to the nfsvers option.
                      It is included for compatibility with other operating
                      systems.
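
                      For example, a hypothetical /etc/fstab entry (server
                      and export names are placeholders) that, on recent
                      kernels, requests NFS version 4.1 explicitly:

                          srv:/export  /mnt  nfs  nfsvers=4.1  0 0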

       soft / softerr / hard
                      Determines the recovery behavior of the NFS client
                      after an NFS request times out.  If no option is
                      specified (or if the hard option is specified), NFS
                      requests are retried indefinitely.  If either the
                      soft or softerr option is specified, then the NFS
                      client fails an NFS request after retrans
                      retransmissions have been sent, causing the NFS
                      client to return either the error EIO (for the soft
                      option) or ETIMEDOUT (for the softerr option) to the
                      calling application.

                      NB: A so-called "soft" timeout can cause silent data
                      corruption in certain cases.  As such, use the soft
                      or softerr option only when client responsiveness is
                      more important than data integrity.  Using NFS over
                      TCP or increasing the value of the retrans option may
                      mitigate some of the risks of using the soft or
                      softerr option.
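
                      For example, a hypothetical entry that prefers
                      responsiveness over integrity but raises the retry
                      count to reduce the risk (the option values here are
                      illustrative only):

                          srv:/export  /mnt  nfs  soft,retrans=6  0 0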

       softreval / nosoftreval
                      In cases where the NFS server is down, it may be
                      useful to allow the NFS client to continue to serve
                      up paths and attributes from cache after retrans
                      attempts to revalidate that cache have timed out.
                      This may, for instance, be helpful when trying to
                      unmount a filesystem tree from a server that is
                      permanently down.

                      It is possible to combine softreval with the soft
                      mount option, in which case operations that cannot be
                      served up from cache will time out and return an
                      error after retrans attempts.  The combination with
                      the default hard mount option implies those uncached
                      operations will continue to retry until a response is
                      received from the server.

                      Note: the default mount option is nosoftreval, which
                      disallows fallback to cache when revalidation fails,
                      and instead follows the behavior dictated by the hard
                      or soft mount option.

       intr / nointr  This option is provided for backward compatibility.
                      It is ignored after kernel 2.6.25.

       timeo=n        The time in deciseconds (tenths of a second) the NFS
                      client waits for a response before it retries an NFS
                      request.

                      For NFS over TCP the default timeo value is 600 (60
                      seconds).  The NFS client performs linear backoff:
                      After each retransmission the timeout is increased by
                      timeo up to the maximum of 600 seconds.

                      However, for NFS over UDP, the client uses an
                      adaptive algorithm to estimate an appropriate timeout
                      value for frequently used request types (such as READ
                      and WRITE requests), but uses the timeo setting for
                      infrequently used request types (such as FSINFO
                      requests).  If the timeo option is not specified,
                      infrequently used request types are retried after 1.1
                      seconds.  After each retransmission, the NFS client
                      doubles the timeout for that request, up to a maximum
                      timeout length of 60 seconds.

       retrans=n      The number of times the NFS client retries a request
                      before it attempts further recovery action.  If the
                      retrans option is not specified, the NFS client tries
                      each UDP request three times and each TCP request
                      twice.

                      The NFS client generates a "server not responding"
                      message after retrans retries, then attempts further
                      recovery (depending on whether the hard mount option
                      is in effect).
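
                      As an illustration, under the TCP defaults
                      (timeo=600, retrans=2) the client waits 60 seconds
                      for the first reply and then, with linear backoff,
                      120 and 180 seconds for the two retransmissions, so a
                      "server not responding" message would appear after
                      roughly six minutes.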

       rsize=n        The maximum number of bytes in each network READ
                      request that the NFS client can receive when reading
                      data from a file on an NFS server.  The actual data
                      payload size of each NFS READ request is equal to or
                      smaller than the rsize setting.  The largest read
                      payload supported by the Linux NFS client is
                      1,048,576 bytes (one megabyte).

                      The rsize value is a positive integral multiple of
                      1024.  Specified rsize values lower than 1024 are
                      replaced with 4096; values larger than 1048576 are
                      replaced with 1048576.  If a specified value is
                      within the supported range but not a multiple of
                      1024, it is rounded down to the nearest multiple of
                      1024.

                      If an rsize value is not specified, or if the
                      specified rsize value is larger than the maximum that
                      either client or server can support, the client and
                      server negotiate the largest rsize value that they
                      can both support.

                      The rsize mount option as specified on the mount(8)
                      command line appears in the /etc/mtab file.  However,
                      the effective rsize value negotiated by the client
                      and server is reported in the /proc/mounts file.

       wsize=n        The maximum number of bytes per network WRITE request
                      that the NFS client can send when writing data to a
                      file on an NFS server.  The actual data payload size
                      of each NFS WRITE request is equal to or smaller than
                      the wsize setting.  The largest write payload
                      supported by the Linux NFS client is 1,048,576 bytes
                      (one megabyte).

                      Similar to rsize, the wsize value is a positive
                      integral multiple of 1024.  Specified wsize values
                      lower than 1024 are replaced with 4096; values larger
                      than 1048576 are replaced with 1048576.  If a
                      specified value is within the supported range but not
                      a multiple of 1024, it is rounded down to the nearest
                      multiple of 1024.

                      If a wsize value is not specified, or if the
                      specified wsize value is larger than the maximum that
                      either client or server can support, the client and
                      server negotiate the largest wsize value that they
                      can both support.

                      The wsize mount option as specified on the mount(8)
                      command line appears in the /etc/mtab file.  However,
                      the effective wsize value negotiated by the client
                      and server is reported in the /proc/mounts file.
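
                      For example, in a hypothetical entry such as the
                      following, rsize=32000 is in range but not a multiple
                      of 1024, so it would be rounded down to 31744 (31 x
                      1024), while wsize=32768 (32 x 1024) is used as
                      given:

                          srv:/export  /mnt  nfs  rsize=32000,wsize=32768  0 0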

       ac / noac      Selects whether the client may cache file attributes.
                      If neither option is specified (or if ac is
                      specified), the client caches file attributes.

                      To improve performance, NFS clients cache file
                      attributes.  Every few seconds, an NFS client checks
                      the server's version of each file's attributes for
                      updates.  Changes that occur on the server in those
                      small intervals remain undetected until the client
                      checks the server again.  The noac option prevents
                      clients from caching file attributes so that
                      applications can more quickly detect file changes on
                      the server.

                      In addition to preventing the client from caching
                      file attributes, the noac option forces application
                      writes to become synchronous so that local changes to
                      a file become visible on the server immediately.
                      That way, other clients can quickly detect recent
                      writes when they check the file's attributes.

                      Using the noac option provides greater cache
                      coherence among NFS clients accessing the same files,
                      but it exacts a significant performance penalty.  As
                      such, judicious use of file locking is encouraged
                      instead.  The DATA AND METADATA COHERENCE section
                      contains a detailed discussion of these trade-offs.

       acregmin=n     The minimum time (in seconds) that the NFS client
                      caches attributes of a regular file before it
                      requests fresh attribute information from a server.
                      If this option is not specified, the NFS client uses
                      a 3-second minimum.  See the DATA AND METADATA
                      COHERENCE section for a full discussion of attribute
                      caching.

       acregmax=n     The maximum time (in seconds) that the NFS client
                      caches attributes of a regular file before it
                      requests fresh attribute information from a server.
                      If this option is not specified, the NFS client uses
                      a 60-second maximum.  See the DATA AND METADATA
                      COHERENCE section for a full discussion of attribute
                      caching.

       acdirmin=n     The minimum time (in seconds) that the NFS client
                      caches attributes of a directory before it requests
                      fresh attribute information from a server.  If this
                      option is not specified, the NFS client uses a
                      30-second minimum.  See the DATA AND METADATA
                      COHERENCE section for a full discussion of attribute
                      caching.

       acdirmax=n     The maximum time (in seconds) that the NFS client
                      caches attributes of a directory before it requests
                      fresh attribute information from a server.  If this
                      option is not specified, the NFS client uses a
                      60-second maximum.  See the DATA AND METADATA
                      COHERENCE section for a full discussion of attribute
                      caching.

       actimeo=n      Using actimeo sets all of acregmin, acregmax,
                      acdirmin, and acdirmax to the same value.  If this
                      option is not specified, the NFS client uses the
                      defaults for each of these options listed above.
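
                      For example, a hypothetical entry for a rarely
                      changing read-only export might relax all four
                      attribute cache timeouts to ten minutes:

                          srv:/export  /mnt  nfs  ro,actimeo=600  0 0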

       bg / fg        Determines how the mount(8) command behaves if an
                      attempt to mount an export fails.  The fg option
                      causes mount(8) to exit with an error status if any
                      part of the mount request times out or fails
                      outright.  This is called a "foreground" mount, and
                      is the default behavior if neither the fg nor bg
                      mount option is specified.

                      If the bg option is specified, a timeout or failure
                      causes the mount(8) command to fork a child which
                      continues to attempt to mount the export.  The parent
                      immediately returns with a zero exit code.  This is
                      known as a "background" mount.

                      If the local mount point directory is missing, the
                      mount(8) command acts as if the mount request timed
                      out.  This permits nested NFS mounts specified in
                      /etc/fstab to proceed in any order during system
                      initialization, even if some NFS servers are not yet
                      available.  Alternatively these issues can be
                      addressed using an automounter (refer to automount(8)
                      for details).

       nconnect=n     When using a connection oriented protocol such as
                      TCP, it may sometimes be advantageous to set up
                      multiple connections between the client and server.
                      For instance, if your clients and/or servers are
                      equipped with multiple network interface cards
                      (NICs), using multiple connections to spread the load
                      may improve overall performance.  In such cases, the
                      nconnect option allows the user to specify the number
                      of connections that should be established between the
                      client and server, up to a limit of 16.

                      Note that the nconnect option may also be used by
                      some pNFS drivers to decide how many connections to
                      set up to the data servers.
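
                      For example, a hypothetical entry that opens four TCP
                      connections to the server:

                          srv:/export  /mnt  nfs  nconnect=4  0 0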

       rdirplus / nordirplus
                      Selects whether to use NFS v3 or v4 READDIRPLUS
                      requests.  If this option is not specified, the NFS
                      client uses READDIRPLUS requests on NFS v3 or v4
                      mounts to read small directories.  Some applications
                      perform better if the client uses only READDIR
                      requests for all directories.

       retry=n        The number of minutes that the mount(8) command
                      retries an NFS mount operation in the foreground or
                      background before giving up.  If this option is not
                      specified, the default value for foreground mounts is
                      2 minutes, and the default value for background
                      mounts is 10000 minutes (80 minutes shy of one week).
                      If a value of zero is specified, the mount(8) command
                      exits immediately after the first failure.

                      Note that this only affects how many retries are made
                      and doesn't affect the delay caused by each retry.
                      For UDP each retry takes the time determined by the
                      timeo and retrans options, which by default will be
                      about 7 seconds.  For TCP the default is 3 minutes,
                      but system TCP connection timeouts will sometimes
                      limit the timeout of each retransmission to around 2
                      minutes.
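
                      For example, a hypothetical entry that backgrounds
                      the mount on failure and keeps retrying for 30
                      minutes:

                          srv:/export  /mnt  nfs  bg,retry=30  0 0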

       sec=flavors    A colon-separated list of one or more security
                      flavors to use for accessing files on the mounted
                      export.  If the server does not support any of these
                      flavors, the mount operation fails.  If sec= is not
                      specified, the client attempts to find a security
                      flavor that both the client and the server support.
                      Valid flavors are none, sys, krb5, krb5i, and krb5p.
                      Refer to the SECURITY CONSIDERATIONS section for
                      details.

       sharecache / nosharecache
                      Determines how the client's data cache and attribute
                      cache are shared when mounting the same export more
                      than once concurrently.  Using the same cache reduces
                      memory requirements on the client and presents
                      identical file contents to applications when the same
                      remote file is accessed via different mount points.

                      If neither option is specified, or if the sharecache
                      option is specified, then a single cache is used for
                      all mount points that access the same export.  If the
                      nosharecache option is specified, then that mount
                      point gets a unique cache.  Note that when data and
                      attribute caches are shared, the mount options from
                      the first mount point take effect for subsequent
                      concurrent mounts of the same export.

                      As of kernel 2.6.18, the behavior specified by
                      nosharecache is legacy caching behavior.  This is
                      considered a data risk since multiple cached copies
                      of the same file on the same client can become out of
                      sync following a local update of one of the copies.

       resvport / noresvport
                      Specifies whether the NFS client should use a
                      privileged source port when communicating with an NFS
                      server for this mount point.  If this option is not
                      specified, or the resvport option is specified, the
                      NFS client uses a privileged source port.  If the
                      noresvport option is specified, the NFS client uses a
                      non-privileged source port.  This option is supported
                      in kernels 2.6.28 and later.

                      Using non-privileged source ports helps increase the
                      maximum number of NFS mount points allowed on a
                      client, but NFS servers must be configured to allow
                      clients to connect via non-privileged source ports.

                      Refer to the SECURITY CONSIDERATIONS section for
                      important details.

       lookupcache=mode
                      Specifies how the kernel manages its cache of
                      directory entries for a given mount point.  mode can
                      be one of all, none, pos, or positive.  This option
                      is supported in kernels 2.6.28 and later.

                      The Linux NFS client caches the result of all NFS
                      LOOKUP requests.  If the requested directory entry
                      exists on the server, the result is referred to as
                      positive.  If the requested directory entry does not
                      exist on the server, the result is referred to as
                      negative.

                      If this option is not specified, or if all is
                      specified, the client assumes both types of directory
                      cache entries are valid until their parent
                      directory's cached attributes expire.

                      If pos or positive is specified, the client assumes
                      positive entries are valid until their parent
                      directory's cached attributes expire, but always
                      revalidates negative entries before an application
                      can use them.

                      If none is specified, the client revalidates both
                      types of directory cache entries before an
                      application can use them.  This permits quick
                      detection of files that were created or removed by
                      other clients, but can impact application and server
                      performance.

                      The DATA AND METADATA COHERENCE section contains a
                      detailed discussion of these trade-offs.
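
                      For example, a hypothetical entry that keeps positive
                      entries cached but always rechecks negative ones:

                          srv:/export  /mnt  nfs  lookupcache=pos  0 0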

       fsc / nofsc    Enables/disables caching of (read-only) data pages on
                      the local disk using the FS-Cache facility.  See
                      cachefilesd(8) and
                      <kernel_source>/Documentation/filesystems/caching for
                      details on how to configure the FS-Cache facility.
                      The default value is nofsc.

       sloppy         The sloppy option is an alternative to specifying the
                      mount.nfs -s option.

       xprtsec=policy Specifies the use of transport layer security to
                      protect NFS network traffic on behalf of this mount
                      point.  policy can be one of none, tls, or mtls.

                      If none is specified, transport layer security is
                      forced off, even if the NFS server supports transport
                      layer security.

                      If tls is specified, the client uses RPC-with-TLS to
                      provide in-transit confidentiality.

                      If mtls is specified, the client uses RPC-with-TLS to
                      authenticate itself and to provide in-transit
                      confidentiality.

                      If either tls or mtls is specified and the server
                      does not support RPC-with-TLS or peer authentication
                      fails, the mount attempt fails.

                      If the xprtsec= option is not specified, the default
                      behavior depends on the kernel version, but is
                      usually equivalent to xprtsec=none.
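
                      For example, a hypothetical entry that requires
                      mutually authenticated TLS for all traffic on this
                      mount:

                          srv:/export  /mnt  nfs  xprtsec=mtls  0 0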

   Options for NFS versions 2 and 3 only
       Use these options, along with the options in the above subsection,
       for NFS versions 2 and 3 only.

       proto=netid    The netid determines the transport that is used to
                      communicate with the NFS server.  Available options
                      are udp, udp6, tcp, tcp6, rdma, and rdma6.  Those
                      which end in 6 use IPv6 addresses and are only
                      available if support for TI-RPC is built in.  Others
                      use IPv4 addresses.

                      Each transport protocol uses different default
                      retrans and timeo settings.  Refer to the description
                      of these two mount options for details.

                      In addition to controlling how the NFS client
                      transmits requests to the server, this mount option
                      also controls how the mount(8) command communicates
                      with the server's rpcbind and mountd services.
                      Specifying a netid that uses TCP forces all traffic
                      from the mount(8) command and the NFS client to use
                      TCP.  Specifying a netid that uses UDP forces all
                      traffic types to use UDP.

                      Before using NFS over UDP, refer to the TRANSPORT
                      METHODS section.

                      If the proto mount option is not specified, the
                      mount(8) command discovers which protocols the server
                      supports and chooses an appropriate transport for
                      each service.  Refer to the TRANSPORT METHODS section
                      for more details.

       udp            The udp option is an alternative to specifying
                      proto=udp.  It is included for compatibility with
                      other operating systems.

                      Before using NFS over UDP, refer to the TRANSPORT
                      METHODS section.

       tcp            The tcp option is an alternative to specifying
                      proto=tcp.  It is included for compatibility with
                      other operating systems.

       rdma           The rdma option is an alternative to specifying
                      proto=rdma.

       port=n         The numeric value of the server's NFS service port.
                      If the server's NFS service is not available on the
                      specified port, the mount request fails.

                      If this option is not specified, or if the specified
                      port value is 0, then the NFS client uses the NFS
                      service port number advertised by the server's
                      rpcbind service.  The mount request fails if the
                      server's rpcbind service is not available, the
                      server's NFS service is not registered with its
                      rpcbind service, or the server's NFS service is not
                      available on the advertised port.

       mountport=n    The numeric value of the server's mountd port.  If
                      the server's mountd service is not available on the
                      specified port, the mount request fails.

                      If this option is not specified, or if the specified
                      port value is 0, then the mount(8) command uses the
                      mountd service port number advertised by the server's
                      rpcbind service.  The mount request fails if the
                      server's rpcbind service is not available, the
                      server's mountd service is not registered with its
                      rpcbind service, or the server's mountd service is
                      not available on the advertised port.

                      This option can be used when mounting an NFS server
                      through a firewall that blocks the rpcbind protocol.
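
                      For example, a hypothetical entry for mounting
                      through such a firewall; the port numbers are
                      placeholders that must match the ports the server's
                      NFS and mountd services are actually pinned to:

                          srv:/export  /mnt  nfs  port=2049,mountport=4002  0 0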

       mountproto=netid
                      The transport the NFS client uses to transmit
                      requests to the NFS server's mountd service when
                      performing this mount request, and when later
                      unmounting this mount point.

                      netid may be udp or tcp, which use IPv4 addresses,
                      or, if TI-RPC is built into the mount.nfs command,
                      udp6 or tcp6, which use IPv6 addresses.

                      This option can be used when mounting an NFS server
                      through a firewall that blocks a particular
                      transport.  When used in combination with the proto
                      option, different transports for mountd requests and
                      NFS requests can be specified.  If the server's
                      mountd service is not available via the specified
                      transport, the mount request fails.

                      Refer to the TRANSPORT METHODS section for more on
                      how the mountproto mount option interacts with the
                      proto mount option.

       mounthost=name The hostname of the host running mountd.  If this
                      option is not specified, the mount(8) command assumes
                      that the mountd service runs on the same host as the
                      NFS service.

       mountvers=n    The RPC version number used to contact the server's
                      mountd.  If this option is not specified, the client
                      uses a version number appropriate to the requested
                      NFS version.  This option is useful when multiple NFS
                      services are running on the same remote server host.

       namlen=n       The maximum length of a pathname component on this
                      mount.  If this option is not specified, the maximum
                      length is negotiated with the server.  In most cases,
                      this maximum length is 255 characters.

                      Some early versions of NFS did not support this
                      negotiation.  Using this option ensures that
                      pathconf(3) reports the proper maximum component
                      length to applications in such cases.

       lock / nolock  Selects whether to use the NLM sideband protocol to
                      lock files on the server.  If neither option is
                      specified (or if lock is specified), NLM locking is
                      used for this mount point.  When using the nolock
                      option, applications can lock files, but such locks
                      provide exclusion only against other applications
                      running on the same client.  Remote applications are
                      not affected by these locks.

                      NLM locking must be disabled with the nolock option
                      when using NFS to mount /var because /var contains
                      files used by the NLM implementation on Linux.  Using
                      the nolock option is also required when mounting
                      exports on NFS servers that do not support the NLM
                      protocol.

       cto / nocto    Selects whether to use close-to-open cache coherence
                      semantics.  If neither option is specified (or if cto
                      is specified), the client uses close-to-open cache
                      coherence semantics.  If the nocto option is
                      specified, the client uses a non-standard heuristic
                      to determine when files on the server have changed.

                      Using the nocto option may improve performance for
                      read-only mounts, but should be used only if the data
                      on the server changes only occasionally.  The DATA
                      AND METADATA COHERENCE section discusses the behavior
                      of this option in more detail.

       acl / noacl    Selects whether to use the NFSACL sideband protocol
                      on this mount point.  The NFSACL sideband protocol is
                      a proprietary protocol implemented in Solaris that
                      manages Access Control Lists.  NFSACL was never made
                      a standard part of the NFS protocol specification.

                      If neither the acl nor the noacl option is specified,
                      the NFS client negotiates with the server to see if
                      the NFSACL protocol is supported, and uses it if the
                      server supports it.  Disabling the NFSACL sideband
                      protocol may be necessary if the negotiation causes
                      problems on the client or server.  Refer to the
                      SECURITY CONSIDERATIONS section for more details.

       local_lock=mechanism
                      Specifies whether to use local locking for either or
                      both of the flock and POSIX locking mechanisms.
                      mechanism can be one of all, flock, posix, or none.
                      This option is supported in kernels 2.6.37 and later.

                      The Linux NFS client provides a way to make locks
                      local.  This means that applications can lock files,
                      but such locks provide exclusion only against other
                      applications running on the same client.  Remote
                      applications are not affected by these locks.

                      If this option is not specified, or if none is
                      specified, the client assumes that the locks are not
                      local.

                      If all is specified, the client assumes that both
                      flock and POSIX locks are local.

                      If flock is specified, the client assumes that only
                      flock locks are local and uses the NLM sideband
                      protocol to lock files when POSIX locks are used.

                      If posix is specified, the client assumes that POSIX
                      locks are local and uses the NLM sideband protocol to
                      lock files when flock locks are used.

                      To support legacy flock behavior similar to that of
                      NFS clients < 2.6.12, use 'local_lock=flock'.  This
                      option is required when exporting NFS mounts via
                      Samba as Samba maps Windows share mode locks as
                      flock.  Since NFS clients > 2.6.12 implement flock by
                      emulating POSIX locks, this would otherwise result in
                      conflicting locks.

                      NOTE: When used together, the 'local_lock' mount
                      option is overridden by the 'nolock'/'lock' mount
                      option.
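
                      For example, a hypothetical entry for an export that
                      will be re-exported via Samba:

                          srv:/export  /mnt  nfs  local_lock=flock  0 0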

   Options for NFS version 4 only
       Use these options, along with the options in the first subsection
       above, for NFS version 4.0 and newer.

       proto=netid    The netid determines the transport that is used to
                      communicate with the NFS server.  Supported options
                      are tcp, tcp6, rdma, and rdma6.  tcp6 and rdma6 use
                      IPv6 addresses and are only available if support for
                      TI-RPC is built in.  The others use IPv4 addresses.

                      All NFS version 4 servers are required to support
                      TCP, so if this mount option is not specified, the
                      NFS version 4 client uses the TCP protocol.  Refer to
                      the TRANSPORT METHODS section for more details.

       minorversion=n Specifies the protocol minor version number.  NFSv4
                      introduces "minor versioning," where NFS protocol
                      enhancements can be introduced without bumping the
                      NFS protocol version number.  Before kernel 2.6.38,
                      the minor version is always zero, and this option is
                      not recognized.  After this kernel, specifying
                      "minorversion=1" enables a number of advanced
                      features, such as NFSv4 sessions.

                      Recent kernels allow the minor version to be
                      specified using the vers= option.  For example,
                      specifying vers=4.1 is the same as specifying
                      vers=4,minorversion=1.

       port=n         The numeric value of the server's NFS service port.
                      If the server's NFS service is not available on the
                      specified port, the mount request fails.

                      If this mount option is not specified, the NFS client
                      uses the standard NFS port number of 2049 without
                      first checking the server's rpcbind service.  This
                      allows an NFS version 4 client to contact an NFS
                      version 4 server through a firewall that may block
                      rpcbind requests.

                      If the specified port value is 0, then the NFS client
                      uses the NFS service port number advertised by the
                      server's rpcbind service.  The mount request fails if
                      the server's rpcbind service is not available, the
                      server's NFS service is not registered with its
                      rpcbind service, or the server's NFS service is not
                      available on the advertised port.

       cto / nocto    Selects whether to use close-to-open cache coherence
                      semantics for NFS directories on this mount point.
                      If neither cto nor nocto is specified, the default is
                      to use close-to-open cache coherence semantics for
                      directories.

                      File data caching behavior is not affected by this
                      option.  The DATA AND METADATA COHERENCE section
                      discusses the behavior of this option in more detail.

       clientaddr=n.n.n.n

       clientaddr=n:n:...:n
                      Specifies a single IPv4 address (in dotted-quad
                      form), or a non-link-local IPv6 address, that the NFS
                      client advertises to allow servers to perform NFS
                      version 4.0 callback requests against files on this
                      mount point.  If the server is unable to establish
                      callback connections to clients, performance may
                      degrade, or accesses to files may temporarily hang.
                      A value of IPv4_ANY (0.0.0.0), or the equivalent IPv6
                      any address, signals to the NFS server that this NFS
                      client does not want delegations.

                      If this option is not specified, the mount(8) command
                      attempts to discover an appropriate callback address
                      automatically.  The automatic discovery process is
                      not perfect, however.  In the presence of multiple
                      client network interfaces, special routing policies,
                      or atypical network topologies, the exact address to
                      use for callbacks may be nontrivial to determine.

                      NFS protocol versions 4.1 and 4.2 use the
                      client-established TCP connection for callback
                      requests, so do not require the server to connect to
                      the client.  This option therefore affects only NFS
                      version 4.0 mounts.
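
                      For example, a hypothetical NFSv4.0 entry that pins
                      the advertised callback address (192.0.2.10 is a
                      placeholder client address):

                          srv:/export  /mnt  nfs  clientaddr=192.0.2.10  0 0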

       migration / nomigration
                      Selects whether the client uses an identification
                      string that is compatible with NFSv4 Transparent
                      State Migration (TSM).  If the mounted server
                      supports NFSv4 migration with TSM, specify the
                      migration option.

                      Some server features misbehave in the face of a
                      migration-compatible identification string.  The
                      nomigration option retains the use of a traditional
                      client identification string which is compatible with
                      legacy NFS servers.  This is also the behavior if
                      neither option is specified.  A client's open and
                      lock state cannot be migrated transparently when it
                      identifies itself via a traditional identification
                      string.

                      This mount option has no effect with NFSv4 minor
                      versions newer than zero, which always use
                      TSM-compatible client identification strings.

       max_connect=n  While the nconnect option sets a limit on the number
                      of connections that can be established to a given
                      server IP, the max_connect option allows the user to
                      specify the maximum number of connections to
                      different server IPs that belong to the same NFSv4.1+
                      server (session trunkable connections), up to a limit
                      of 16.  When the client discovers that it has
                      established a client ID to an already existing
                      server, instead of dropping the newly created network
                      transport, the client adds this new connection to the
                      list of available transports for that RPC client.

       trunkdiscovery / notrunkdiscovery
                      When the client discovers a new filesystem on an
                      NFSv4.1+ server, the trunkdiscovery mount option will
                      cause it to send a GETATTR for the fs_locations
                      attribute.  If it receives a non-zero length reply,
                      it will iterate through the response, and for each
                      server location it will establish a connection, send
                      an EXCHANGE_ID, and test for session trunking.  If
                      the trunking test succeeds, the connection will be
                      added to the existing set of transports for the
                      server, subject to the limit specified by the
                      max_connect option.  The default is notrunkdiscovery.

nfs4 FILE SYSTEM TYPE
       The nfs4 file system type is an old syntax for specifying NFSv4
       usage.  It can still be used with all NFSv4-specific and common
       options, except for the nfsvers mount option.

MOUNT CONFIGURATION FILE
       If the mount command is configured to do so, all of the mount
       options described in the previous section can also be configured in
       the /etc/nfsmount.conf file.  See nfsmount.conf(5) for details.

EXAMPLES
       To mount using NFS version 3, use the nfs file system type and
       specify the nfsvers=3 mount option.  To mount using NFS version 4,
       use either the nfs file system type, with the nfsvers=4 mount
       option, or the nfs4 file system type.

       The following example from an /etc/fstab file causes the mount
       command to negotiate reasonable defaults for NFS behavior.

               server:/export  /mnt  nfs   defaults                      0 0

       This example shows how to mount using NFS version 4 over TCP with
       Kerberos 5 mutual authentication.

               server:/export  /mnt  nfs4  sec=krb5                      0 0

       This example shows how to mount using NFS version 4 over TCP with
       Kerberos 5 privacy or data integrity mode.

               server:/export  /mnt  nfs4  sec=krb5p:krb5i               0 0

       This example can be used to mount /usr over NFS.

               server:/export  /usr  nfs   ro,nolock,nocto,actimeo=3600  0 0

       This example shows how to mount an NFS server using a raw IPv6
       link-local address.

               [fe80::215:c5ff:fb3e:e2b1%eth0]:/export /mnt nfs defaults 0 0

TRANSPORT METHODS
       NFS clients send requests to NFS servers via Remote Procedure Calls,
       or RPCs.  The RPC client discovers remote service endpoints
       automatically, handles per-request authentication, adjusts request
       parameters for different byte endianness on client and server, and
       retransmits requests that may have been lost by the network or
       server.  RPC requests and replies flow over a network transport.

       In most cases, the mount(8) command, NFS client, and NFS server can
       automatically negotiate proper transport and data transfer size
       settings for a mount point.  In some cases, however, it pays to
       specify these settings explicitly using mount options.

       Traditionally, NFS clients used the UDP transport exclusively for
       transmitting requests to servers.  Though its implementation is
       simple, NFS over UDP has many limitations that prevent smooth
       operation and good performance in some common deployment
       environments.  Even an insignificant packet loss rate results in
       the loss of whole NFS requests; as such, retransmit timeouts are
       usually in the subsecond range to allow clients to recover quickly
       from dropped requests, but this can result in extraneous network
       traffic and server load.

       However, UDP can be quite effective in specialized settings where
       the network's MTU is large relative to NFS's data transfer size
       (such as network environments that enable jumbo Ethernet frames).
       In such environments, trimming the rsize and wsize settings so that
       each NFS read or write request fits in just a few network frames
       (or even in a single frame) is advised.  This reduces the
       probability that the loss of a single MTU-sized network frame
       results in the loss of an entire large read or write request.
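
       For example, on a hypothetical network using 9000-byte jumbo
       frames, an entry such as the following keeps each UDP read or write
       request inside a single frame (the values are illustrative):

               server:/export  /mnt  nfs  proto=udp,rsize=8192,wsize=8192  0 0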

       TCP is the default transport protocol used for all modern NFS
       implementations.  It performs well in almost every conceivable
       network environment and provides excellent guarantees against data
       corruption caused by network unreliability.  TCP is often a
       requirement for mounting a server through a network firewall.

       Under normal circumstances, networks drop packets much more
       frequently than NFS servers drop requests.  As such, an aggressive
       retransmit timeout setting for NFS over TCP is unnecessary.
       Typical timeout settings for NFS over TCP are between one and ten
       minutes.  After the client exhausts its retransmits (the value of
       the retrans mount option), it assumes a network partition has
       occurred, and attempts to reconnect to the server on a fresh
       socket.  Since TCP itself makes network data transfer reliable,
       rsize and wsize can safely be allowed to default to the largest
       values supported by both client and server, independent of the
       network's MTU size.

   Using the mountproto mount option
       This section applies only to NFS version 3 mounts since NFS version
       4 does not use a separate protocol for mount requests.

       The Linux NFS client can use a different transport for contacting
       an NFS server's rpcbind service, its mountd service, its Network
       Lock Manager (NLM) service, and its NFS service.  The exact
       transports employed by the Linux NFS client for each mount point
       depends on the settings of the transport mount options, which
       include proto, mountproto, udp, and tcp.

       The client sends Network Status Manager (NSM) notifications via UDP
       no matter what transport options are specified, but listens for
       server NSM notifications on both UDP and TCP.  The NFS Access
       Control List (NFSACL) protocol shares the same transport as the
       main NFS service.

       If no transport options are specified, the Linux NFS client uses
       UDP to contact the server's mountd service, and TCP to contact its
       NLM and NFS services by default.

       If the server does not support these transports for these services,
       the mount(8) command attempts to discover what the server supports,
       and then retries the mount request once using the discovered
       transports.  If the server does not advertise any transport
       supported by the client or is misconfigured, the mount request
       fails.  If the bg option is in effect, the mount command
       backgrounds itself and continues to attempt the specified mount
       request.

       When the proto option, the udp option, or the tcp option is
       specified but the mountproto option is not, the specified transport
       is used to contact both the server's mountd service and its NLM and
       NFS services.

       If the mountproto option is specified but none of the proto, udp or
       tcp options are specified, then the specified transport is used for
       the initial mountd request, but the mount command attempts to
       discover what the server supports for the NFS protocol, preferring
       TCP if both transports are supported.

       If both the mountproto and proto (or udp or tcp) options are
       specified, then the transport specified by the mountproto option is
       used for the initial mountd request, and the transport specified by
       the proto option (or the udp or tcp options) is used for NFS, no
       matter what order these options appear.  No automatic service
       discovery is performed if these options are specified.

       If any of the proto, udp, tcp, or mountproto options are specified
       more than once on the same mount command line, then the value of
       the rightmost instance of each of these options takes effect.
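
       For example, a hypothetical mount command that sends mountd
       requests via UDP but uses TCP for NFS itself:

               mount -t nfs -o proto=tcp,mountproto=udp server:/export /mnt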

   Using NFS over UDP on high-speed links
       Using NFS over UDP on high-speed links such as Gigabit can cause
       silent data corruption.

       The problem can be triggered at high loads, and is caused by
       problems in IP fragment reassembly.  NFS reads and writes typically
       transmit UDP packets of 4 Kilobytes or more, which have to be
       broken up into several fragments in order to be sent over the
       Ethernet link, which limits packets to 1500 bytes by default.  This
       process happens at the IP network layer and is called
       fragmentation.

       In order to identify fragments that belong together, IP assigns a
       16-bit IP ID value to each packet; fragments generated from the
       same UDP packet will have the same IP ID.  The receiving system
       will collect these fragments and combine them to form the original
       UDP packet.  This process is called reassembly.  The default
       timeout for packet reassembly is 30 seconds; if the network stack
       does not receive all fragments of a given packet within this
       interval, it assumes the missing fragment(s) got lost and discards
       those it already received.

       The problem this creates over high-speed links is that it is
       possible to send more than 65536 packets within 30 seconds.  In
       fact, with heavy NFS traffic one can observe that the IP IDs repeat
       after about 5 seconds.

       This has serious effects on reassembly: if one fragment gets lost,
       another fragment from a different packet but with the same IP ID
       will arrive within the 30 second timeout, and the network stack
       will combine these fragments to form a new packet.  Most of the
       time, network layers above IP will detect this mismatched
       reassembly - in the case of UDP, the UDP checksum, which is a 16
       bit checksum over the entire packet payload, will usually not
       match, and UDP will discard the bad packet.

       However, the UDP checksum is 16 bit only, so there is a chance of 1
       in 65536 that it will match even if the packet payload is
       completely random (which very often isn't the case).  If that is
       the case, silent data corruption will occur.

       This potential should be taken seriously, at least on Gigabit
       Ethernet.  Network speeds of 100Mbit/s should be considered less
       problematic, because with most traffic patterns IP ID wrap around
       will take much longer than 30 seconds.

       It is therefore strongly recommended to use NFS over TCP where
       possible, since TCP does not perform fragmentation.

       If you absolutely have to use NFS over UDP over Gigabit Ethernet,
       some steps can be taken to mitigate the problem and reduce the
       probability of corruption:
929
930       Jumbo frames:  Many Gigabit network cards are capable  of  transmitting
931                      frames  bigger  than  the 1500 byte limit of traditional
932                      Ethernet, typically 9000 bytes. Using  jumbo  frames  of
933                      9000  bytes will allow you to run NFS over UDP at a page
934                      size of 8K without fragmentation.  Of  course,  this  is
935                      only  feasible  if  all  involved stations support jumbo
936                      frames.
937
938                      To enable a machine to send jumbo frames on  cards  that
939                      support  them, it is sufficient to configure the inter‐
940                      face for an MTU value of 9000.
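
                      For example, using the ip(8) utility on an  interface
                      named eth0 (the interface name is illustrative):

                              ip link set dev eth0 mtu 9000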
941
942       Lower reassembly timeout:
943                      By lowering this timeout below the time it takes the  IP
944                      ID counter to wrap around, incorrect reassembly of frag‐
945                      ments can be prevented as well. To do so,  simply  write
946                      the   new   timeout  value  (in  seconds)  to  the  file
947                      /proc/sys/net/ipv4/ipfrag_time.
948
949                      A value of 2 seconds will greatly reduce the probability
950                      of  IPID  clashes  on a single Gigabit link, while still
951                      allowing for a reasonable timeout when  receiving  frag‐
952                      mented traffic from distant peers.
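
                      For example, to set a 2 second reassembly timeout  as
                      root:

                              echo 2 > /proc/sys/net/ipv4/ipfrag_time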
953
DATA AND METADATA COHERENCE
955       Some  modern cluster file systems provide perfect cache coherence among
956       their clients.  Perfect cache coherence among disparate NFS clients  is
957       expensive  to  achieve, especially on wide area networks.  As such, NFS
958       settles for weaker cache coherence that satisfies the  requirements  of
959       most file sharing types.
960
961   Close-to-open cache consistency
962       Typically  file sharing is completely sequential.  First client A opens
963       a file, writes something to it, then closes it.  Then  client  B  opens
964       the same file, and reads the changes.
965
966       When an application opens a file stored on an NFS version 3 server, the
967       NFS client checks that the file exists on the server and is  accessible
968       to the opener by sending a GETATTR or ACCESS request.   The  NFS  client
969       sends these requests regardless of the freshness of the  file's  cached
970       attributes.
971
972       When  the  application  closes the file, the NFS client writes back any
973       pending changes to the file so  that  the  next  opener  can  view  the
974       changes.  This also gives the NFS client an opportunity to report write
975       errors to the application via the return code from close(2).
976
977       The behavior of checking at open time and flushing at close time is re‐
978       ferred  to  as close-to-open cache consistency, or CTO.  It can be dis‐
979       abled for an entire mount point using the nocto mount option.
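
       For example, to disable close-to-open consistency on a  mount  point
       (the server name and paths are illustrative):

               mount -t nfs -o nocto server:/export /mnt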
980
981   Weak cache consistency
982       There are still opportunities for a  client's  data  cache  to  contain
983       stale  data.  The NFS version 3 protocol introduced "weak cache consis‐
984       tency" (also known as WCC) which provides a way of efficiently checking
985       a  file's  attributes before and after a single request.  This allows a
986       client to help identify changes that could  have  been  made  by  other
987       clients.
988
989       When  a client is using many concurrent operations that update the same
990       file at the same time (for example, during asynchronous write  behind),
991       it  is  still difficult to tell whether it was that client's updates or
992       some other client's updates that altered the file.
993
994   Attribute caching
995       Use the noac mount option to achieve attribute  cache  coherence  among
996       multiple  clients.   Almost every file system operation checks file at‐
997       tribute information.  The client keeps this information  cached  for  a
998       period  of time to reduce network and server load.  When noac is in ef‐
999       fect, a client's file attribute cache is disabled,  so  each  operation
1000       that  needs  to  check  a file's attributes is forced to go back to the
1001       server.  This permits a client to see changes to a file  very  quickly,
1002       at the cost of many extra network operations.
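
       For example, to disable attribute caching on  a  mount  point  (the
       server name and paths are illustrative):

               mount -t nfs -o noac server:/export /mnt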
1003
1004       Be  careful not to confuse the noac option with "no data caching."  The
1005       noac mount option prevents the client from caching file  metadata,  but
1006       there are still races that may result in data cache incoherence between
1007       client and server.
1008
1009       The NFS protocol is not designed to support true  cluster  file  system
1010       cache coherence without some type of application serialization.  If ab‐
1011       solute cache coherence among clients is required,  applications  should
1012       use file locking. Alternatively, applications can also open their files
1013       with the O_DIRECT flag to disable data caching entirely.
1014
1015   File timestamp maintenance
1016       NFS servers are responsible for managing file and directory  timestamps
1017       (atime,  ctime,  and  mtime).  When a file is accessed or updated on an
1018       NFS server, the file's timestamps are updated just like they  would  be
1019       on a filesystem local to an application.
1020
1021       NFS  clients  cache  file  attributes,  including timestamps.  A file's
1022       timestamps are updated on NFS clients when its attributes are retrieved
1023       from the NFS server.  Thus there may be some delay before timestamp up‐
1024       dates on an NFS server appear to applications on NFS clients.
1025
1026       To comply with the POSIX filesystem standard, the Linux NFS client  re‐
1027       lies  on  NFS servers to keep a file's mtime and ctime timestamps prop‐
1028       erly up to date.  It does this by flushing local data  changes  to  the
1029       server  before reporting mtime to applications via system calls such as
1030       stat(2).
1031
1032       The Linux client handles atime  updates  more  loosely,  however.   NFS
1033       clients  maintain good performance by caching data, but that means that
1034       application reads, which normally update atime, are  not  reflected  to
1035       the server where a file's atime is actually maintained.
1036
1037       Because of this caching behavior, the Linux NFS client does not support
1038       generic atime-related mount options.  See mount(8) for details on these
1039       options.
1040
1041       In particular, the atime/noatime, diratime/nodiratime, relatime/norela‐
1042       time, and strictatime/nostrictatime mount options have no effect on NFS
1043       mounts.
1044
1045       /proc/mounts  may  report  that the relatime mount option is set on NFS
1046       mounts, but in fact the atime semantics are always as  described  here,
1047       and are not like relatime semantics.
1048
1049   Directory entry caching
1050       The  Linux NFS client caches the result of all NFS LOOKUP requests.  If
1051       the requested directory entry exists on the server, the result  is  re‐
1052       ferred  to as a positive lookup result.  If the requested directory en‐
1053       try does not exist on the server (that is, the server returned ENOENT),
1054       the result is referred to as a negative lookup result.
1055
1056       To  detect  when  directory  entries  have been added or removed on the
1057       server, the Linux NFS client  watches  a  directory's  mtime.   If  the
1058       client  detects  a  change in a directory's mtime, the client drops all
1059       cached LOOKUP results for that directory.  Since the directory's  mtime
1060       is a cached attribute, it may take some time before a client notices it
1061       has changed.  See the descriptions of the acdirmin, acdirmax, and  noac
1062       mount  options  for more information about how long a directory's mtime
1063       is cached.
1064
1065       Caching directory entries improves the performance of applications that
1066       do  not  share  files with applications on other clients.  Using cached
1067       information about directories can interfere with applications that  run
1068       concurrently on multiple clients and need to detect the creation or re‐
1069       moval of files quickly, however.  The lookupcache mount  option  allows
1070       some tuning of directory entry caching behavior.
1071
1072       Before  kernel  release 2.6.28, the Linux NFS client tracked only posi‐
1073       tive lookup results.  This permitted applications to detect new  direc‐
1074       tory  entries  created  by  other clients quickly while still providing
1075       some of the performance benefits of caching.  If an application depends
1076       on  the  previous  lookup caching behavior of the Linux NFS client, you
1077       can use lookupcache=positive.
1078
1079       If the client ignores its cache and validates every application  lookup
1080       request  with the server, that client can immediately detect when a new
1081       directory entry has been either created or removed by  another  client.
1082       You  can  specify  this behavior using lookupcache=none.  The extra NFS
1083       requests needed if the client does not cache directory entries can  ex‐
1084       act  a  performance penalty.  Disabling lookup caching should result in
1085       less of a performance penalty than using noac, and has no effect on how
1086       the NFS client caches the attributes of files.
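
       For example, to cache only positive lookup results on a mount  point
       (the server name and paths are illustrative):

               mount -t nfs -o lookupcache=positive server:/export /mnt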
1087
1088   The sync mount option
1089       The NFS client treats the sync mount option differently than some other
1090       file systems (refer to mount(8) for a description of the  generic  sync
1091       and  async  mount options).  If neither sync nor async is specified (or
1092       if the async option is specified), the NFS client delays sending appli‐
1093       cation writes to the server until any of these events occur:
1094
1095              Memory pressure forces reclamation of system memory resources.
1096
1097              An  application  flushes  file  data  explicitly  with  sync(2),
1098              msync(2), or fsync(3).
1099
1100              An application closes a file with close(2).
1101
1102              The file is locked/unlocked via fcntl(2).
1103
1104       In other words, under normal circumstances, data written by an applica‐
1105       tion may not immediately appear on the server that hosts the file.
1106
1107       If  the sync option is specified on a mount point, any system call that
1108       writes data to files on that mount point causes that data to be flushed
1109       to  the  server  before  the system call returns control to user space.
1110       This provides greater data cache coherence among clients, but at a sig‐
1111       nificant performance cost.
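
       For example, to force synchronous writes on  a  mount  point  (the
       server name and paths are illustrative):

               mount -t nfs -o sync server:/export /mnt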
1112
1113       Applications  can  use the O_SYNC open flag to force application writes
1114       to individual files to go to the server immediately without the use  of
1115       the sync mount option.
1116
1117   Using file locks with NFS
1118       The  Network Lock Manager protocol is a separate sideband protocol used
1119       to manage file locks in NFS version 3.  To support lock recovery  after
1120       a  client  or server reboot, a second sideband protocol -- known as the
1121       Network Status Manager protocol -- is also required.  In NFS version 4,
1122       file  locking  is  supported directly in the main NFS protocol, and the
1123       NLM and NSM sideband protocols are not used.
1124
1125       In most cases, NLM and NSM services are started automatically,  and  no
1126       extra configuration is required.  Configure all NFS clients with fully-
1127       qualified domain names to ensure that NFS servers can find  clients  to
1128       notify them of server reboots.
1129
1130       NLM supports advisory file locks only.  To lock NFS files, use fcntl(2)
1131       with the F_GETLK and F_SETLK commands.  The NFS  client  converts  file
1132       locks obtained via flock(2) to advisory locks.
1133
1134       When  mounting  servers  that  do not support the NLM protocol, or when
1135       mounting an NFS server through a firewall that blocks the  NLM  service
1136       port,  specify  the  nolock  mount option. NLM locking must be disabled
1137       with the nolock option when using NFS to mount /var because  /var  con‐
1138       tains files used by the NLM implementation on Linux.
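
       For example, to mount without NLM locking (the server name and paths
       are illustrative):

               mount -t nfs -o nolock server:/export /mnt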
1139
1140       Specifying the nolock option may also be advised to improve the perfor‐
1141       mance of a proprietary application which runs on a  single  client  and
1142       uses file locks extensively.
1143
1144   NFS version 4 caching features
1145       The data and metadata caching behavior of NFS version 4 clients is sim‐
1146       ilar to that of earlier versions.  However, NFS version 4 adds two fea‐
1147       tures  that  improve cache behavior: change attributes and file delega‐
1148       tion.
1149
1150       The change attribute is a new part of NFS file and  directory  metadata
1151       which  tracks  data changes.  It replaces the use of a file's modifica‐
1152       tion and change time stamps as a way for clients to validate  the  con‐
1153       tent  of  their  caches.  Change attributes are independent of the time
1154       stamp resolution on either the server or client, however.
1155
1156       A file delegation is a contract between an NFS  version  4  client  and
1157       server  that  allows  the  client  to treat a file temporarily as if no
1158       other client is accessing it.  The server promises to notify the client
1159       (via  a  callback  request)  if  another client attempts to access that
1160       file.  Once a file has been delegated to a client, the client can cache
1161       that  file's  data  and  metadata  aggressively  without contacting the
1162       server.
1163
1164       File delegations come in two flavors: read and write.  A  read  delega‐
1165       tion  means that the server notifies the client about any other clients
1166       that want to write to the file.  A  write  delegation  means  that  the
1167       client gets notified about either read or write accessors.
1168
1169       Servers  grant  file  delegations when a file is opened, and can recall
1170       delegations at any time when another client wants access  to  the  file
1171       that  conflicts  with  any delegations already granted.  Delegations on
1172       directories are not supported.
1173
1174       In order to support delegation callback, the server checks the  network
1175       return  path to the client during the client's initial contact with the
1176       server.  If contact with the client cannot be established,  the  server
1177       simply does not grant any delegations to that client.
1178
SECURITY CONSIDERATIONS
1180       NFS  servers  control access to file data, but they depend on their RPC
1181       implementation to provide authentication of NFS requests.   Traditional
1182       NFS access control mimics the standard mode bit access control provided
1183       in local file systems.  Traditional RPC authentication uses a number to
1184       represent each user (usually the user's own uid), a number to represent
1185       the user's group (the user's gid), and a set  of  up  to  16  auxiliary
1186       group numbers to represent other groups of which the user may be a mem‐
1187       ber.
1188
1189       Typically, file data and user ID values appear  unencrypted  (i.e.  "in
1190       the  clear")  on the network.  Moreover, NFS versions 2 and 3 use sepa‐
1191       rate sideband protocols for mounting, locking and unlocking files,  and
1192       reporting system status of clients and servers.  These auxiliary proto‐
1193       cols use no authentication.
1194
1195       In addition to combining these sideband protocols  with  the  main  NFS
1196       protocol,  NFS  version 4 introduces more advanced forms of access con‐
1197       trol, authentication, and in-transit data protection.  The NFS  version
1198       4 specification mandates support for strong authentication and security
1199       flavors that provide per-RPC integrity checking  and  encryption.   Be‐
1200       cause  NFS  version  4  combines the function of the sideband protocols
1201       into the main NFS protocol, the new security features apply to all  NFS
1202       version  4  operations  including  mounting,  file  locking, and so on.
1203       RPCSEC GSS authentication can also be used with NFS versions 2 and 3,
1204       but it does not protect their sideband protocols.
1205
1206       The  sec mount option specifies the security flavor used for operations
1207       on behalf of users on that NFS mount point.  Specifying  sec=krb5  pro‐
1208       vides  cryptographic  proof  of  a user's identity in each RPC request.
1209       This provides strong verification of the identity  of  users  accessing
1210       data  on the server.  Note that additional configuration besides adding
1211       this mount option is required in order  to  enable  Kerberos  security.
1212       Refer to the rpc.gssd(8) man page for details.
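
       For example, to request Kerberos authentication on  a  mount  point,
       assuming Kerberos has already been configured on client  and  server
       (the server name and paths are illustrative):

               mount -t nfs -o sec=krb5 server:/export /mnt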
1213
1214       Two  additional  flavors  of Kerberos security are supported: krb5i and
1215       krb5p.  The krb5i security flavor provides a  cryptographically  strong
1216       guarantee that the data in each RPC request has not been tampered with.
1217       The krb5p security flavor encrypts every RPC request  to  prevent  data
1218       exposure  during  network transit; however, expect some performance im‐
1219       pact when using integrity checking or encryption.  Similar support  for
1220       other forms of cryptographic security is also available.
1221
1222   NFS version 4 filesystem crossing
1223       The  NFS version 4 protocol allows a client to renegotiate the security
1224       flavor when the client crosses into a new  filesystem  on  the  server.
1225       The newly negotiated flavor affects only accesses to the new  filesys‐
1226       tem.
1227
1228       Such negotiation typically occurs when a client crosses from a server's
1229       pseudo-fs into one of the server's exported physical filesystems, which
1230       often have more restrictive security settings than the pseudo-fs.
1231
1232   NFS version 4 Leases
1233       In NFS version 4, a lease is a period during which a server irrevocably
1234       grants a client file locks.  Once the lease expires, the server may re‐
1235       voke those locks.  Clients periodically renew their leases  to  prevent
1236       lock revocation.
1237
1238       After  an  NFS  version  4 server reboots, each client tells the server
1239       about existing file open and lock state under its lease  before  opera‐
1240       tion  can continue.  If a client reboots, the server frees all open and
1241       lock state associated with that client's lease.
1242
1243       When establishing a lease, therefore, a client must identify itself  to
1244       a  server.  Each client presents an arbitrary string to distinguish it‐
1245       self from other clients.  The client administrator can  supplement  the
1246       default  identity string using the nfs4.nfs4_unique_id module parameter
1247       to avoid collisions with other client identity strings.
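
       For example, a supplemental identity string might be  set  through  a
       modprobe.d(5) entry such as the following sketch, where the value  is
       a placeholder to be replaced with a string unique to this client:

               options nfs4 nfs4_unique_id=<unique-string>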
1248
1249       A client also uses a unique security flavor and principal when  it  es‐
1250       tablishes  its lease.  If two clients present the same identity string,
1251       a server can use client principals to distinguish  between  them,  thus
1252       securely preventing one client from interfering with the other's lease.
1253
1254       The  Linux  NFS  client  establishes  one  lease  on each NFS version 4
1255       server.  Lease management operations, such as lease  renewal,  are  not
1256       done on behalf of a particular file, lock, user, or mount point, but on
1257       behalf of the client that owns that lease.  A client uses a  consistent
1258       identity  string,  security flavor, and principal across client reboots
1259       to ensure that the server can promptly reap expired lease state.
1260
1261       When Kerberos is configured on a Linux NFS client  (i.e.,  there  is  a
1262       /etc/krb5.keytab on that client), the client attempts to use a Kerberos
1263       security flavor for its lease management operations.  Kerberos provides
1264       secure  authentication of each client.  By default, the client uses the
1265       host/ or nfs/ service principal in its /etc/krb5.keytab for  this  pur‐
1266       pose, as described in rpc.gssd(8).
1267
1268       If  the  client has Kerberos configured, but the server does not, or if
1269       the client does not have a keytab or the requisite service  principals,
1270       the client uses AUTH_SYS and UID 0 for lease management.
1271
1272   Using non-privileged source ports
1273       NFS  clients  usually communicate with NFS servers via network sockets.
1274       Each end of a socket is assigned a port value, which is simply a number
1275       between  1 and 65535 that distinguishes socket endpoints at the same IP
1276       address.  A socket is uniquely defined by a  tuple  that  includes  the
1277       transport protocol (TCP or UDP) and the port values and IP addresses of
1278       both endpoints.
1279
1280       The NFS client can choose any source port value for  its  sockets,  but
1281       usually  chooses  a privileged port.  A privileged port is a port value
1282       less than 1024.  Only a process  with  root  privileges  may  create  a
1283       socket with a privileged source port.
1284
1285       The exact range of privileged source ports that can be chosen is set by
1286       a pair of sysctls to avoid choosing a well-known port, such as the port
1287       used  by  ssh.  This means the number of source ports available for the
1288       NFS client, and therefore the number of socket connections that can  be
1289       used at the same time, is practically limited to only a few hundred.
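
       On Linux, these are typically the sunrpc.min_resvport and
       sunrpc.max_resvport sysctls; for example, to inspect the current
       range with sysctl(8):

               sysctl sunrpc.min_resvport sunrpc.max_resvport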
1290
1291       As  described above, the traditional default NFS authentication scheme,
1292       known as AUTH_SYS, relies on sending local UID and GID numbers to iden‐
1293       tify  users  making NFS requests.  An NFS server assumes that if a con‐
1294       nection comes from a privileged port, the UID and GID  numbers  in  the
1295       NFS requests on this connection have been verified by the client's ker‐
1296       nel or some other local authority.  This is an easy  system  to  spoof,
1297       but on a trusted physical network between trusted hosts, it is entirely
1298       adequate.
1299
1300       Roughly speaking, one socket is used for each NFS mount  point.   If  a
1301       client  could  use  non-privileged  source ports as well, the number of
1302       sockets allowed, and  thus  the  maximum  number  of  concurrent  mount
1303       points, would be much larger.
1304
1305       Using  non-privileged source ports may compromise server security some‐
1306       what, since any user on AUTH_SYS mount points can now pretend to be any
1307       other user when making NFS requests.  NFS servers therefore do not al‐
1308       low this by default; it must be enabled explicitly via an export option.
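
       On Linux servers, this is typically the insecure export  option;  an
       illustrative /etc/exports entry that pairs it with Kerberos  authen‐
       tication (see exports(5)):

               /export  *(rw,sec=krb5,insecure)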
1309
1310       To retain good security while allowing as many mount points  as  possi‐
1311       ble,  it is best to allow non-privileged client connections only if the
1312       server and client both require strong authentication, such as Kerberos.
1313
1314   Mounting through a firewall
1315       A firewall may reside between an NFS client and server, or  the  client
1316       or  server  may block some of its own ports via IP filter rules.  It is
1317       still possible to mount an NFS server through a firewall,  though  some
1318       of  the  mount(8) command's automatic service endpoint discovery mecha‐
1319       nisms may not work; this requires you to provide specific endpoint  de‐
1320       tails via NFS mount options.
1321
1322       NFS  servers  normally  run a portmapper or rpcbind daemon to advertise
1323       their service endpoints to clients. Clients use the rpcbind  daemon  to
1324       determine:
1325
1326              What network port each RPC-based service is using
1327
1328              What transport protocols each RPC-based service supports
1329
1330       The  rpcbind daemon uses a well-known port number (111) to help clients
1331       find a service endpoint.  Although NFS often uses a standard port  num‐
1332       ber  (2049),  auxiliary services such as the NLM service can choose any
1333       unused port number at random.
1334
1335       Common firewall configurations block the well-known rpcbind  port.   In
1336       the  absence  of an rpcbind service, the server administrator fixes the
1337       port number of NFS-related services so that the firewall can allow  ac‐
1338       cess to specific NFS service ports.  Client administrators then specify
1339       the port number for the  mountd  service  via  the  mount(8)  command's
1340       mountport  option.   It may also be necessary to enforce the use of TCP
1341       or UDP if the firewall blocks one of those transports.
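
       For example, to mount through a firewall using a fixed mountd  port
       and TCP (the port number and server name are illustrative):

               mount -o mountport=4002,proto=tcp server:/export /mnt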
1342
1343   NFS Access Control Lists
1344       Solaris allows NFS version 3 clients direct access to POSIX Access Con‐
1345       trol Lists stored in its local file systems.  This proprietary sideband
1346       protocol, known as NFSACL, provides richer  access  control  than  mode
1347       bits.   Linux  implements  this protocol for compatibility with the So‐
1348       laris NFS implementation.  The NFSACL protocol never became a  standard
1349       part of the NFS version 3 specification, however.
1350
1351       The  NFS  version 4 specification mandates a new version of Access Con‐
1352       trol Lists that are semantically richer than POSIX ACLs.  NFS version 4
1353       ACLs  are  not fully compatible with POSIX ACLs; as such, some transla‐
1354       tion between the two is required in an  environment  that  mixes  POSIX
1355       ACLs and NFS version 4.
1356
THE REMOUNT OPTION
1358       Generic  mount options such as rw and sync can be modified on NFS mount
1359       points using the remount option.  See mount(8) for more information  on
1360       generic mount options.
1361
1362       With few exceptions, NFS-specific options cannot be  modified  during  a
1363       remount.  For example, the underlying transport and the NFS version can‐
1364       not be changed by a remount.
1365
1366       Performing a remount on an NFS file system mounted with the noac option
1367       may have unintended consequences.  The noac option is a combination  of
1368       the generic option sync and the NFS-specific option actimeo=0.
1369
1370   Unmounting after a remount
1371       For  mount  points that use NFS versions 2 or 3, the NFS umount subcom‐
1372       mand depends on knowing the original set of mount options used to  per‐
1373       form  the  MNT  operation.  These options are stored on disk by the NFS
1374       mount subcommand, and can be erased by a remount.
1375
1376       To ensure that the saved mount options are not erased during a remount,
1377       specify  either  the  local mount directory, or the server hostname and
1378       export pathname, but not both, during a remount.  For example,
1379
1380               mount -o remount,ro /mnt
1381
1382       merges the mount option ro with the mount options already saved on disk
1383       for the NFS server mounted at /mnt.
1384
FILES
1386       /etc/fstab     file system table
1387
1388       /etc/nfsmount.conf
1389                      Configuration file for NFS mounts
1390
NOTES
1392       Before 2.4.7, the Linux NFS client did not support NFS over TCP.
1393
1394       Before  2.4.20,  the  Linux  NFS  client  used a heuristic to determine
1395       whether cached file data was still valid rather than using the standard
1396       close-to-open cache coherency method described above.
1397
1398       Starting with 2.4.22, the Linux NFS client employs a Van Jacobson-based
1399       RTT estimator to determine retransmit timeout  values  when  using  NFS
1400       over UDP.
1401
1402       Before 2.6.0, the Linux NFS client did not support NFS version 4.
1403
1404       Before  2.6.8,  the  Linux  NFS  client used only synchronous reads and
1405       writes when the rsize and wsize settings were smaller than the system's
1406       page size.
1407
1408       The Linux client's support for protocol versions depends on whether the
1409       kernel  was  built  with  options  CONFIG_NFS_V2,  CONFIG_NFS_V3,  CON‐
1410       FIG_NFS_V4, CONFIG_NFS_V4_1, and CONFIG_NFS_V4_2.
1411
SEE ALSO
1413       fstab(5), mount(8), umount(8), mount.nfs(5), umount.nfs(5), exports(5),
1414       nfsmount.conf(5),   netconfig(5),   ipv6(7),   nfsd(8),   sm-notify(8),
1415       rpc.statd(8), rpc.idmapd(8), rpc.gssd(8), rpc.svcgssd(8), kerberos(1)
1416
1417       RFC 768 for the UDP specification.
1418       RFC 793 for the TCP specification.
1419       RFC 1813 for the NFS version 3 specification.
1420       RFC 1832 for the XDR specification.
1421       RFC 1833 for the RPC bind specification.
1422       RFC 2203 for the RPCSEC GSS API protocol specification.
1423       RFC 7530 for the NFS version 4.0 specification.
1424       RFC 5661 for the NFS version 4.1 specification.
1425       RFC 7862 for the NFS version 4.2 specification.
1426
1427
1428
1429                                9 October 2012                          NFS(5)