NFS(5)                        File Formats Manual                       NFS(5)

NAME
       nfs - fstab format and options for the nfs file systems

SYNOPSIS
       /etc/fstab

DESCRIPTION
       NFS  is  an  Internet  Standard protocol created by Sun Microsystems in
       1984. NFS was developed to allow file sharing between systems  residing
       on  a local area network.  Depending on kernel configuration, the Linux
       NFS client may support NFS versions 3, 4.0, 4.1, or 4.2.

       The mount(8) command attaches a file system to the system's name  space
       hierarchy  at  a  given mount point.  The /etc/fstab file describes how
       mount(8) should assemble a system's file name  hierarchy  from  various
       independent  file  systems  (including  file  systems  exported  by NFS
       servers).  Each line in the /etc/fstab file  describes  a  single  file
       system,  its  mount  point, and a set of default mount options for that
       mount point.

       For NFS file system mounts, a line in the /etc/fstab file specifies the
       server  name,  the path name of the exported server directory to mount,
       the local directory that is the mount point, the type  of  file  system
       that is being mounted, and a list of mount options that control the way
       the filesystem is mounted and how the NFS client behaves when accessing
       files on this mount point.  The fifth and sixth fields on each line are
       not used by NFS, so conventionally each contains the digit  zero.   For
       example:

               server:path   /mountpoint   fstype   option,option,...   0 0

       The  server's  hostname  and  export pathname are separated by a colon,
       while the mount options are separated by commas. The  remaining  fields
       are separated by blanks or tabs.

       The server's hostname can be an unqualified hostname, a fully qualified
       domain name, a dotted quad IPv4 address, or an IPv6 address enclosed in
       square  brackets.  Link-local and site-local IPv6 addresses must be ac‐
       companied by an interface identifier.  See ipv6(7) for details on spec‐
       ifying raw IPv6 addresses.

       The  fstype  field  contains  "nfs".   Use  of  the  "nfs4"  fstype  in
       /etc/fstab is deprecated.

MOUNT OPTIONS
       Refer to mount(8) for a description of generic mount options  available
       for  all file systems. If you do not need to specify any mount options,
       use the generic option defaults in /etc/fstab.

   Options supported by all versions
       These options are valid to use with any NFS version.

       nfsvers=n      The NFS protocol version  number  used  to  contact  the
                      server's  NFS  service.   If the server does not support
                      the requested version, the mount request fails.  If this
                      option  is  not  specified, the client tries version 4.2
                      first, then negotiates down until  it  finds  a  version
                      supported by the server.

       vers=n         This option is an alternative to the nfsvers option.  It
                      is included for compatibility with other operating  sys‐
                      tems.
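
                      For example, the following /etc/fstab line (the  server
                      name  and  export  path are placeholders) pins the mount
                      to NFS version 4.1:

                          server:/export  /mnt  nfs  vers=4.1  0 0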

       soft / softerr / hard
                      Determines the recovery behavior of the NFS client after
                      an NFS request times out.  If no option is specified (or
                      if  the  hard option is specified), NFS requests are re‐
                      tried indefinitely.  If either the soft or  softerr  op‐
                      tion  is specified, then the NFS client fails an NFS re‐
                      quest after  retrans  retransmissions  have  been  sent,
                      causing  the  NFS  client to return either the error EIO
                      (for the soft option) or ETIMEDOUT (for the softerr  op‐
                      tion) to the calling application.

                      NB:  A  so-called  "soft"  timeout can cause silent data
                      corruption in certain cases. As such, use  the  soft  or
                      softerr  option  only when client responsiveness is more
                      important than data integrity.  Using NFS  over  TCP  or
                      increasing  the value of the retrans option may mitigate
                      some of the risks of using the soft or softerr option.
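
                      For example, when responsiveness matters more than data
                      integrity, an invocation such as  the  following  (the
                      values shown are illustrative, not recommendations) re‐
                      turns an error after a bounded number of retries instead
                      of retrying forever:

                          mount -t nfs -o soft,timeo=100,retrans=5 \
                                server:/export /mnt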

       softreval / nosoftreval
                      In cases where the NFS server is down, it may be  useful
                      to  allow  the  NFS client to continue to serve up paths
                      and attributes from  cache  after  retrans  attempts  to
                      revalidate that cache have timed out.  This may, for in‐
                      stance, be helpful when trying to unmount  a  filesystem
                      tree from a server that is permanently down.

                      It  is possible to combine softreval with the soft mount
                      option, in which case operations that cannot  be  served
                      up  from  cache  will time out and return an error after
                      retrans attempts. The combination with the default  hard
                      mount option implies those uncached operations will con‐
                      tinue to retry until a response  is  received  from  the
                      server.

                      Note: the default mount option is nosoftreval which dis‐
                      allows fallback to cache when  revalidation  fails,  and
                      instead  follows  the  behavior  dictated by the hard or
                      soft mount option.

       intr / nointr  This option is provided for backward compatibility.   It
                      is ignored after kernel 2.6.25.

       timeo=n        The  time  in  deciseconds  (tenths of a second) the NFS
                      client waits for a response before it retries an NFS re‐
                      quest.

                      For NFS over TCP the default timeo value is 600 (60 sec‐
                      onds).  The NFS client performs  linear  backoff:  After
                      each retransmission the timeout is increased by timeo up
                      to the maximum of 600 seconds.

                      However, for NFS over UDP, the client uses  an  adaptive
                      algorithm  to  estimate an appropriate timeout value for
                      frequently used request types (such as  READ  and  WRITE
                      requests),  but  uses the timeo setting for infrequently
                      used request types (such as FSINFO  requests).   If  the
                      timeo option is not specified, infrequently used request
                      types are retried after 1.1  seconds.   After  each  re‐
                      transmission,  the  NFS  client  doubles the timeout for
                      that request, up to a maximum timeout length of 60  sec‐
                      onds.

       retrans=n      The number of times the NFS client retries a request be‐
                      fore it attempts further recovery action. If the retrans
                      option  is  not specified, the NFS client tries each UDP
                      request three times and each TCP request twice.

                      The NFS client generates a "server not responding"  mes‐
                      sage after retrans retries, then attempts further recov‐
                      ery (depending on whether the hard mount  option  is  in
                      effect).

       rsize=n        The maximum number of bytes in each network READ request
                      that the NFS client can receive when reading data from a
                      file  on an NFS server.  The actual data payload size of
                      each NFS READ request is equal to or  smaller  than  the
                      rsize setting. The largest read payload supported by the
                      Linux NFS client is 1,048,576 bytes (one megabyte).

                      The rsize value is a positive integral multiple of 1024.
                      Specified rsize values lower than 1024 are replaced with
                      4096; values  larger  than  1048576  are  replaced  with
                      1048576.  If  a  specified value is within the supported
                      range but not a multiple of 1024, it is rounded down  to
                      the nearest multiple of 1024.

                      If  an rsize value is not specified, or if the specified
                      rsize value is  larger  than  the  maximum  that  either
                      client  or server can support, the client and server ne‐
                      gotiate the largest rsize value that they can both  sup‐
                      port.

                      The rsize mount option as specified on the mount(8) com‐
                      mand line appears in the /etc/mtab  file.  However,  the
                      effective  rsize  value  negotiated  by  the  client and
                      server is reported in the /proc/mounts file.

       wsize=n        The maximum number of bytes per  network  WRITE  request
                      that the NFS client can send when writing data to a file
                      on an NFS server. The actual data payload size  of  each
                      NFS  WRITE request is equal to or smaller than the wsize
                      setting. The largest  write  payload  supported  by  the
                      Linux NFS client is 1,048,576 bytes (one megabyte).

                      Similar  to  rsize,  the wsize value is a positive inte‐
                      gral multiple of 1024.   Specified  wsize  values  lower
                      than  1024  are  replaced  with 4096; values larger than
                      1048576 are replaced with 1048576. If a specified  value
                      is  within  the  supported  range  but not a multiple of
                      1024, it is rounded down  to  the  nearest  multiple  of
                      1024.

                      If  a  wsize value is not specified, or if the specified
                      wsize value is  larger  than  the  maximum  that  either
                      client  or server can support, the client and server ne‐
                      gotiate the largest wsize value that they can both  sup‐
                      port.

                      The wsize mount option as specified on the mount(8) com‐
                      mand line appears in the /etc/mtab  file.  However,  the
                      effective  wsize  value  negotiated  by  the  client and
                      server is reported in the /proc/mounts file.
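
                      For example, a command such as  the  following  (the
                      mount point is a placeholder) shows the rsize and wsize
                      values actually in effect for a mount:

                          grep /mnt /proc/mounts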

       ac / noac      Selects whether the client may cache file attributes. If
                      neither option is specified (or if ac is specified), the
                      client caches file attributes.

                      To improve  performance,  NFS  clients  cache  file  at‐
                      tributes.  Every  few  seconds, an NFS client checks the
                      server's version of each file's attributes for  updates.
                      Changes  that  occur on the server in those small inter‐
                      vals remain  undetected  until  the  client  checks  the
                      server  again.  The  noac  option  prevents clients from
                      caching file attributes so that  applications  can  more
                      quickly detect file changes on the server.

                      In  addition  to preventing the client from caching file
                      attributes, the noac option forces application writes to
                      become  synchronous  so that local changes to a file be‐
                      come visible on the server immediately.  That way, other
                      clients can quickly detect recent writes when they check
                      the file's attributes.

                      Using the noac option provides greater  cache  coherence
                      among  NFS  clients accessing the same files, but it ex‐
                      tracts a significant performance penalty.  As such,  ju‐
                      dicious  use of file locking is encouraged instead.  The
                      DATA AND METADATA COHERENCE section contains a  detailed
                      discussion of these trade-offs.

       acregmin=n     The minimum time (in seconds) that the NFS client caches
                      attributes of a regular file before  it  requests  fresh
                      attribute  information from a server.  If this option is
                      not specified, the NFS client uses a  3-second  minimum.
                      See  the  DATA AND METADATA COHERENCE section for a full
                      discussion of attribute caching.

       acregmax=n     The maximum time (in seconds) that the NFS client caches
                      attributes  of  a  regular file before it requests fresh
                      attribute information from a server.  If this option  is
                      not  specified, the NFS client uses a 60-second maximum.
                      See the DATA AND METADATA COHERENCE section for  a  full
                      discussion of attribute caching.

       acdirmin=n     The minimum time (in seconds) that the NFS client caches
                      attributes of a directory before it requests  fresh  at‐
                      tribute  information  from  a server.  If this option is
                      not specified, the NFS client uses a 30-second  minimum.
                      See  the  DATA AND METADATA COHERENCE section for a full
                      discussion of attribute caching.

       acdirmax=n     The maximum time (in seconds) that the NFS client caches
                      attributes  of  a directory before it requests fresh at‐
                      tribute information from a server.  If  this  option  is
                      not  specified, the NFS client uses a 60-second maximum.
                      See the DATA AND METADATA COHERENCE section for  a  full
                      discussion of attribute caching.

       actimeo=n      Using  actimeo sets all of acregmin, acregmax, acdirmin,
                      and acdirmax to the same value.  If this option  is  not
                      specified,  the NFS client uses the defaults for each of
                      these options listed above.
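
                      For example, the following /etc/fstab line (server  and
                      path  are  placeholders)  caps all four attribute cache
                      timers at ten minutes:

                          server:/export  /mnt  nfs  actimeo=600  0 0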

       bg / fg        Determines how the mount(8) command behaves  if  an  at‐
                      tempt  to  mount  an export fails.  The fg option causes
                      mount(8) to exit with an error status if any part of the
                      mount  request  times  out  or  fails outright.  This is
                      called a "foreground" mount, and is the default behavior
                      if neither the fg nor bg mount option is specified.

                      If  the  bg  option  is  specified, a timeout or failure
                      causes the mount(8) command to fork a child  which  con‐
                      tinues to attempt to mount the export.  The parent imme‐
                      diately returns with a zero exit code.  This is known as
                      a "background" mount.

                      If  the  local  mount  point  directory  is missing, the
                      mount(8) command acts as if the mount request timed out.
                      This  permits  nested NFS mounts specified in /etc/fstab
                      to proceed in any order  during  system  initialization,
                      even  if some NFS servers are not yet available.  Alter‐
                      natively these issues can be addressed  using  an  auto‐
                      mounter (refer to automount(8) for details).
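
                      For example, the following /etc/fstab line (server  and
                      path  are placeholders) backgrounds a failed mount at‐
                      tempt and keeps retrying it for 30  minutes  (see  the
                      retry option below):

                          server:/export  /mnt  nfs  bg,retry=30  0 0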

       nconnect=n     When  using  a connection oriented protocol such as TCP,
                      it may sometimes be advantageous to set up multiple con‐
                      nections between the client and server. For instance, if
                      your clients and/or servers are equipped  with  multiple
                      network  interface  cards (NICs), using multiple connec‐
                      tions to spread the load  may  improve  overall  perfor‐
                      mance.   In  such  cases, the nconnect option allows the
                      user to specify the number of connections that should be
                      established  between the client and server up to a limit
                      of 16.

                      Note that the nconnect option may also be used  by  some
                      pNFS drivers to decide how many connections to set up to
                      the data servers.
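
                      For example, a command such as the following (the count
                      shown is illustrative) opens four  TCP  connections  to
                      the same server and spreads NFS traffic across them:

                          mount -t nfs -o nconnect=4 server:/export /mnt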

       rdirplus / nordirplus
                      Selects whether to use NFS  v3  or  v4  READDIRPLUS  re‐
                      quests.  If this option is not specified, the NFS client
                      uses READDIRPLUS requests on NFS v3 or v4 mounts to read
                      small  directories.  Some applications perform better if
                      the client uses only READDIR requests for  all  directo‐
                      ries.

       retry=n        The  number of minutes that the mount(8) command retries
                      an NFS mount operation in the foreground  or  background
                      before  giving up.  If this option is not specified, the
                      default value for foreground mounts is  2  minutes,  and
                      the default value for background mounts is 10000 minutes
                      (80 minutes shy of one week).  If a  value  of  zero  is
                      specified,  the mount(8) command exits immediately after
                      the first failure.

                      Note that this only affects how many  retries  are  made
                      and  doesn't affect the delay caused by each retry.  For
                      UDP each retry takes the time determined  by  the  timeo
                      and  retrans  options,  which by default will be about 7
                      seconds.  For TCP the default is 3 minutes,  but  system
                      TCP connection timeouts will sometimes limit the timeout
                      of each retransmission to around 2 minutes.

       sec=flavors    A colon-separated list of one or more  security  flavors
                      to use for accessing files on the mounted export. If the
                      server does not support any of these flavors, the  mount
                      operation  fails.   If sec= is not specified, the client
                      attempts to find a security flavor that both the  client
                      and  the  server  support.  Valid flavors are none, sys,
                      krb5, krb5i, and krb5p.  Refer to the SECURITY CONSIDER‐
                      ATIONS section for details.

       sharecache / nosharecache
                      Determines  how  the  client's  data cache and attribute
                      cache are shared when mounting the same export more than
                      once  concurrently.  Using the same cache reduces memory
                      requirements on the client and presents  identical  file
                      contents  to  applications  when the same remote file is
                      accessed via different mount points.

                      If neither option is specified, or if the sharecache op‐
                      tion  is  specified, then a single cache is used for all
                      mount points  that  access  the  same  export.   If  the
                      nosharecache  option is specified, then that mount point
                      gets a unique cache.  Note that when data and  attribute
                      caches  are  shared,  the  mount  options from the first
                      mount point take effect for subsequent concurrent mounts
                      of the same export.

                      As  of kernel 2.6.18, the behavior specified by noshare‐
                      cache is legacy caching behavior. This is  considered  a
                      data  risk since multiple cached copies of the same file
                      on the same client can become out of  sync  following  a
                      local update of one of the copies.

       resvport / noresvport
                      Specifies whether the NFS client should use a privileged
                      source port when communicating with an  NFS  server  for
                      this  mount  point.  If this option is not specified, or
                      the resvport option is specified, the NFS client uses  a
                      privileged  source  port.   If  the noresvport option is
                      specified, the NFS client uses a  non-privileged  source
                      port.   This  option  is supported in kernels 2.6.28 and
                      later.

                      Using non-privileged source  ports  helps  increase  the
                      maximum  number of NFS mount points allowed on a client,
                      but NFS servers must be configured to allow  clients  to
                      connect via non-privileged source ports.

                      Refer  to the SECURITY CONSIDERATIONS section for impor‐
                      tant details.

       lookupcache=mode
                      Specifies how the kernel manages its cache of  directory
                      entries  for  a  given  mount point.  mode can be one of
                      all, none, pos, or positive.  This option  is  supported
                      in kernels 2.6.28 and later.

                      The Linux NFS client caches the result of all NFS LOOKUP
                      requests.  If the requested directory  entry  exists  on
                      the  server,  the result is referred to as positive.  If
                      the requested directory entry  does  not  exist  on  the
                      server, the result is referred to as negative.

                      If this option is not specified, or if all is specified,
                      the client assumes both types of directory cache entries
                      are  valid  until  their  parent  directory's cached at‐
                      tributes expire.

                      If pos or positive is specified, the client assumes pos‐
                      itive  entries  are valid until their parent directory's
                      cached attributes expire, but always  revalidates  nega‐
                      tive entries before an application can use them.

                      If  none is specified, the client revalidates both types
                      of directory cache entries before an application can use
                      them.   This  permits quick detection of files that were
                      created or removed by other clients, but can impact  ap‐
                      plication and server performance.

                      The  DATA  AND METADATA COHERENCE section contains a de‐
                      tailed discussion of these trade-offs.

       fsc / nofsc    Enables or disables the caching of (read-only) data pag‐
                      es on the local disk  using  the  FS-Cache  facility.
                      See  cachefilesd(8)  and   <kernel_source>/Documenta‐
                      tion/filesystems/caching for details on how to configure
                      the FS-Cache facility.  Default value is nofsc.

       sloppy         The sloppy option is an alternative  to  specifying  the
                      mount.nfs -s option.

       xprtsec=policy Specifies the use of transport layer security to protect
                      NFS network traffic on behalf of this mount point.  pol‐
                      icy can be one of none, tls, or mtls.

                      If none is specified, transport layer security is forced
                      off, even if the NFS server supports transport layer se‐
                      curity.   If tls is specified, the client uses RPC-with-
                      TLS to provide in-transit confidentiality.  If  mtls  is
                      specified,  the client uses RPC-with-TLS to authenticate
                      itself and to provide  in-transit  confidentiality.   If
                      the server does not support RPC-with-TLS or peer authen‐
                      tication fails, the mount attempt fails.

                      If the xprtsec= option is not specified, the default be‐
                      havior  depends on the kernel, but is usually equivalent
                      to xprtsec=none.
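
                      For example, a command such as the following (it assumes
                      both peers are already provisioned  for  RPC-with-TLS)
                      requires mutually authenticated, encrypted transport:

                          mount -t nfs -o xprtsec=mtls server:/export /mnt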

       write=behavior Controls how the NFS client handles the write(2)  system
                      call.   behavior can be one of lazy, eager, or wait.  If
                      lazy (the default) is specified, then the NFS client de‐
                      lays sending application writes to the NFS server as de‐
                      scribed in the DATA AND METADATA COHERENCE section.   If
                      eager  is  specified,  then the NFS client sends off the
                      write immediately  as  an  unstable  WRITE  to  the  NFS
                      server.  If wait is specified, then the NFS client sends
                      off the write immediately as an unstable  WRITE  to  the
                      NFS server and then waits for the reply.

   Options for NFS versions 2 and 3 only
       Use  these options, along with the options in the above subsection, for
       NFS versions 2 and 3 only.

       proto=netid    The netid determines the transport that is used to  com‐
                      municate  with  the  NFS  server.  Available options are
                      udp, udp6, tcp, tcp6, rdma, and rdma6.  Those which  end
                      in  6  use IPv6 addresses and are only available if sup‐
                      port for TI-RPC is built in. Others use IPv4 addresses.

                      Each transport protocol uses different  default  retrans
                      and  timeo  settings.  Refer to the description of these
                      two mount options for details.

                      In addition to controlling how the NFS client  transmits
                      requests  to the server, this mount option also controls
                      how the mount(8) command communicates with the  server's
                      rpcbind  and  mountd  services.  Specifying a netid that
                      uses TCP forces all traffic from  the  mount(8)  command
                      and  the NFS client to use TCP.  Specifying a netid that
                      uses UDP forces all traffic types to use UDP.

                      Before using NFS over UDP, refer to the TRANSPORT  METH‐
                      ODS section.

                      If the proto mount option is not specified, the mount(8)
                      command discovers which protocols  the  server  supports
                      and  chooses  an appropriate transport for each service.
                      Refer to the TRANSPORT METHODS section for more details.

       udp            The  udp  option  is  an   alternative   to   specifying
                      proto=udp.   It is included for compatibility with other
                      operating systems.

                      Before using NFS over UDP, refer to the TRANSPORT  METH‐
                      ODS section.

       tcp            The   tcp   option   is  an  alternative  to  specifying
                      proto=tcp.  It is included for compatibility with  other
                      operating systems.

       rdma           The   rdma   option  is  an  alternative  to  specifying
                      proto=rdma.

       port=n         The numeric value of the server's NFS service port.   If
                      the  server's NFS service is not available on the speci‐
                      fied port, the mount request fails.

                      If this option is not specified,  or  if  the  specified
                      port  value  is 0, then the NFS client uses the NFS ser‐
                      vice port number advertised by the server's rpcbind ser‐
                      vice.   The  mount request fails if the server's rpcbind
                      service is not available, the server's  NFS  service  is
                      not registered with its rpcbind service, or the server's
                      NFS service is not available on the advertised port.

       mountport=n    The numeric value of the server's mountd port.   If  the
                      server's  mountd  service is not available on the speci‐
                      fied port, the mount request fails.

                      If this option is not specified,  or  if  the  specified
                      port  value  is  0,  then  the mount(8) command uses the
                      mountd service port number advertised  by  the  server's
                      rpcbind   service.   The  mount  request  fails  if  the
                      server's rpcbind service is not available, the  server's
                      mountd  service  is not registered with its rpcbind ser‐
                      vice, or the server's mountd service is not available on
                      the advertised port.

                      This  option  can  be  used  when mounting an NFS server
                      through a firewall that blocks the rpcbind protocol.
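
                      For example, if the server runs mountd on a fixed  port
                      (the  port numbers here are illustrative), both rpcbind
                      lookups can be skipped entirely:

                          mount -t nfs -o nfsvers=3,port=2049,mountport=32767 \
                                server:/export /mnt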

       mountproto=netid
                      The transport the NFS client uses to  transmit  requests
                      to  the NFS server's mountd service when performing this
                      mount request, and  when  later  unmounting  this  mount
                      point.

                      netid may be one of udp or tcp, which use IPv4 address‐
                      es, or, if TI-RPC is built into the  mount.nfs  command,
                      udp6 or tcp6, which use IPv6 addresses.

                      This  option  can  be  used  when mounting an NFS server
                      through a firewall that blocks a  particular  transport.
                      When  used in combination with the proto option, differ‐
                      ent transports for mountd requests and NFS requests  can
                      be  specified.   If  the  server's mountd service is not
                      available via the specified transport, the mount request
                      fails.

                      Refer  to  the TRANSPORT METHODS section for more on how
                      the mountproto mount option  interacts  with  the  proto
                      mount option.

       mounthost=name The hostname of the host running mountd.  If this option
                      is not specified, the mount(8) command assumes that  the
                      mountd service runs on the same host as the NFS service.

       mountvers=n    The  RPC  version  number  used  to contact the server's
                      mountd.  If this option is  not  specified,  the  client
                      uses  a  version number appropriate to the requested NFS
                      version.  This option is useful when multiple  NFS  ser‐
                      vices are running on the same remote server host.

       namlen=n       The  maximum  length  of  a  pathname  component on this
                      mount.  If this option is  not  specified,  the  maximum
                      length  is  negotiated  with  the server. In most cases,
                      this maximum length is 255 characters.

                      Some early versions of NFS did not support this negotia‐
                      tion.   Using  this  option ensures that pathconf(3) re‐
                      ports the proper maximum component  length  to  applica‐
                      tions in such cases.

       lock / nolock  Selects whether to use the NLM sideband protocol to lock
                      files on the server.  If neither option is specified (or
                      if  lock  is  specified),  NLM  locking is used for this
                      mount point.  When using the nolock option, applications
                      can  lock  files,  but such locks provide exclusion only
                      against other applications running on the  same  client.
                      Remote applications are not affected by these locks.

                      NLM locking must be disabled with the nolock option when
                      using NFS to mount /var because /var contains files used
                      by  the  NLM  implementation on Linux.  Using the nolock
                      option is also required when  mounting  exports  on  NFS
                      servers that do not support the NLM protocol.

       cto / nocto    Selects whether to use close-to-open cache coherence se‐
                      mantics.  If neither option is specified (or if  cto  is
                      specified),  the  client uses close-to-open cache coher‐
                      ence semantics. If the nocto option  is  specified,  the
                      client  uses  a non-standard heuristic to determine when
                      files on the server have changed.

                      Using the nocto option may improve performance for read-
                      only  mounts, but should be used only if the data on the
                      server changes only occasionally.  The DATA AND METADATA
                      COHERENCE  section discusses the behavior of this option
                      in more detail.

       acl / noacl    Selects whether to use the NFSACL sideband  protocol  on
                      this  mount  point.   The  NFSACL sideband protocol is a
                      proprietary protocol implemented in Solaris that manages
                      Access  Control  Lists. NFSACL was never made a standard
                      part of the NFS protocol specification.

                      If neither acl nor noacl option is  specified,  the  NFS
                      client  negotiates  with the server to see if the NFSACL
                      protocol is supported, and uses it if  the  server  sup‐
                      ports it.  Disabling the NFSACL sideband protocol may be
                      necessary if the  negotiation  causes  problems  on  the
                      client  or server.  Refer to the SECURITY CONSIDERATIONS
                      section for more details.

       local_lock=mechanism
                      Specifies whether to use local locking  for  either  or
                      both of the flock and the POSIX locking mechanisms. mech‐
                      anism can be one of all, flock, posix, or none.  This op‐
                      tion is supported in kernels 2.6.37 and later.

                      The Linux NFS client provides a way to make locks local.
                      This means that applications can lock files,  but  such
                      locks  provide exclusion only against other applications
                      running on the same client. Remote applications are  not
                      affected by these locks.

                      If  this  option  is not specified, or if none is speci‐
                      fied, the client assumes that the locks are not local.

                      If all is specified, the client assumes that both  flock
                      and POSIX locks are local.

                      If  flock  is  specified,  the  client assumes that only
                      flock locks are local and uses the NLM sideband protocol
                      to lock files when POSIX locks are used.

                      If  posix  is  specified,  the client assumes that POSIX
                      locks are local and uses the NLM  sideband  protocol  to
                      lock files when flock locks are used.

                      To  support legacy flock behavior similar to that of NFS
                      clients < 2.6.12, use 'local_lock=flock'. This option is
                      required  when  exporting  NFS mounts via Samba as Samba
                      maps Windows  share  mode  locks  as  flock.  Since  NFS
                      clients  >  2.6.12  implement  flock  by emulating POSIX
                      locks, this will result in conflicting locks.

                      NOTE: When used together, the 'local_lock' mount  option
                      is overridden by the 'nolock'/'lock' mount options.
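
                      For example, the following /etc/fstab line (server  and
                      path are placeholders) is appropriate for a mount that
                      will be re-exported via Samba:

                          server:/export  /mnt  nfs  local_lock=flock  0 0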

   Options for NFS version 4 only
       Use  these  options,  along  with  the  options in the first subsection
       above, for NFS version 4.0 and newer.

       proto=netid    The netid determines the transport that is used to  com‐
                      municate  with  the  NFS  server.  Supported options are
                      tcp, tcp6, rdma, and rdma6.  tcp6 and rdma6 use IPv6 ad‐
                      dresses and are only available if support for TI-RPC  is
                      built in.  The others use IPv4 addresses.

                      All NFS version 4 servers are required to  support  TCP,
                      so  if  this mount option is not specified, the NFS ver‐
                      sion 4 client uses  the  TCP  protocol.   Refer  to  the
                      TRANSPORT METHODS section for more details.

       minorversion=n Specifies  the protocol minor version number.  NFSv4 in‐
                      troduces "minor versioning," where NFS protocol enhance‐
                      ments can be introduced without bumping the NFS protocol
                      version number.  Before kernel 2.6.38, the minor version
                      is  always zero, and this option is not recognized.  Af‐
                      ter this kernel, specifying "minorversion=1"  enables  a
                      number of advanced features, such as NFSv4 sessions.

                      Recent  kernels  allow the minor version to be specified
                      using  the  vers=  option.   For   example,   specifying
                      vers=4.1  is  the  same  as  specifying vers=4,minorver‐
                      sion=1.
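
                      For example, the following two commands (paths are place‐
                      holders) request exactly the same protocol version:

                          mount -t nfs -o vers=4,minorversion=1 \
                                server:/export /mnt
                          mount -t nfs -o vers=4.1 server:/export /mnt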

       port=n         The numeric value of the server's NFS service port.   If
                      the  server's NFS service is not available on the speci‐
                      fied port, the mount request fails.

                      If this mount option is not specified,  the  NFS  client
                      uses  the standard NFS port number of 2049 without first
                      checking the server's rpcbind service.  This  allows  an
                      NFS  version 4 client to contact an NFS version 4 server
                      through a firewall that may block rpcbind requests.

                      If the specified port value is 0, then  the  NFS  client
                      uses  the  NFS  service  port  number  advertised by the
                      server's rpcbind service.  The mount  request  fails  if
                      the  server's  rpcbind  service  is  not  available, the
                      server's NFS service is not registered with its  rpcbind
                      service, or the server's NFS service is not available on
                      the advertised port.

       cto / nocto    Selects whether to use close-to-open cache coherence se‐
                      mantics  for  NFS  directories  on this mount point.  If
                      neither cto nor nocto is specified, the  default  is  to
                      use close-to-open cache coherence semantics for directo‐
                      ries.

                      File data caching behavior is not affected by  this  op‐
                      tion.  The DATA AND METADATA COHERENCE section discusses
                      the behavior of this option in more detail.

       clientaddr=n.n.n.n

       clientaddr=n:n:...:n
                      Specifies a single IPv4 address (in  dotted-quad  form),
                      or  a  non-link-local  IPv6 address, that the NFS client
                      advertises to allow servers to perform NFS  version  4.0
                      callback  requests against files on this mount point. If
                      the server is unable to establish  callback  connections
                      to  clients,  performance  may  degrade,  or accesses to
                      files may temporarily hang.  A  value  of  IPv4_ANY
                      (0.0.0.0), or the equivalent IPv6 any address,  signals
                      to the NFS server that this NFS client does not want
                      delegations.

                      If  this  option  is not specified, the mount(8) command
                      attempts to discover an appropriate callback address au‐
                      tomatically.   The  automatic  discovery  process is not
                      perfect, however.  In the presence  of  multiple  client
                      network  interfaces, special routing policies, or atypi‐
                      cal network topologies, the exact  address  to  use  for
                      callbacks may be nontrivial to determine.

                      NFS  protocol versions 4.1 and 4.2 use the client-estab‐
                      lished TCP connection for callback requests, so  do  not
                      require  the  server to connect to the client.  This op‐
                      tion therefore affects only NFS version 4.0 mounts.
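
                      For example, a command such as the following  (the  ad‐
                      dress  shown  is from the documentation range and purely
                      illustrative) advertises an explicit  callback  address
                      for an NFS version 4.0 mount:

                          mount -t nfs -o vers=4.0,clientaddr=192.0.2.10 \
                                server:/export /mnt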

       migration / nomigration
                      Selects whether the client uses an identification string
                      that  is  compatible with NFSv4 Transparent State Migra‐
                      tion (TSM).  If the mounted server supports NFSv4 migra‐
                      tion with TSM, specify the migration option.

                      Some  server  features misbehave in the face of a migra‐
                      tion-compatible identification string.  The  nomigration
                      option  retains the use of a traditional client identi‐
                      fication string which  is  compatible  with  legacy  NFS
                      servers.  This is also the behavior if neither option is
                      specified.  A client's open and lock state cannot be mi‐
                      grated  transparently  when  it  identifies itself via a
                      traditional identification string.

                      This mount option has no effect with  NFSv4  minor  ver‐
                      sions  newer  than zero, which always use TSM-compatible
                      client identification strings.

       max_connect=n  While the nconnect option sets a limit on the number  of
                      connections that can be established to a  given  server
                      IP, the max_connect option allows the user to specify the
                      maximum number of connections to different  server  IPs
                      that belong to the same NFSv4.1+ server (session trunk‐
                      able connections), up to a limit of 16.  When the client
                      discovers that it has established a client ID to an al‐
                      ready existing server, instead of  dropping  the  newly
                      created network transport, the client adds this new con‐
                      nection to the list of available  transports  for  that
                      RPC client.

       trunkdiscovery / notrunkdiscovery
                      When the client discovers a new filesystem on a NFSv4.1+
                      server, the trunkdiscovery mount option will cause it to
                      send  a  GETATTR  for the fs_locations attribute.  If it
                      receives  a  non-zero  length  reply,  it  will  iterate
                      through  the  response,  and for each server location it
                      will establish a connection, send  an  EXCHANGE_ID,  and
                      test  for  session  trunking.  If the trunking test suc‐
                      ceeds, the connection will be added to the existing  set
                      of transports for the server, subject to the limit spec‐
                      ified  by  the  max_connect  option.   The  default   is
                      notrunkdiscovery.

nfs4 FILE SYSTEM TYPE
       The  nfs4 file system type is an old syntax for specifying NFSv4 usage.
       It can still be used with all NFSv4-specific and  common  options,  ex‐
       cept for the nfsvers mount option.

MOUNT CONFIGURATION FILE
       If  the  mount command is configured to do so, all of the mount options
       described in the  previous  section  can  also  be  configured  in  the
       /etc/nfsmount.conf file. See nfsmount.conf(5) for details.

EXAMPLES
       To mount using NFS version 3, use the nfs file system type and  specify
       the  nfsvers=3  mount  option.  To mount using NFS version 4, use either
       the nfs file system type, with the nfsvers=4 mount option, or the  nfs4
       file system type.

       The following example from an /etc/fstab file causes the mount  command
       to negotiate reasonable defaults for NFS behavior.

               server:/export  /mnt  nfs   defaults                      0 0

       This  example shows how to mount using NFS version 4 over TCP with Ker‐
       beros 5 mutual authentication.

               server:/export  /mnt  nfs4  sec=krb5                      0 0

       This example shows how to mount using NFS version 4 over TCP with  Ker‐
       beros 5 privacy or data integrity mode.

               server:/export  /mnt  nfs4  sec=krb5p:krb5i               0 0

       This example can be used to mount /usr over NFS.

               server:/export  /usr  nfs   ro,nolock,nocto,actimeo=3600  0 0

       This example shows how to mount an NFS server using a raw IPv6 link-lo‐
       cal address.

               [fe80::215:c5ff:fb3e:e2b1%eth0]:/export /mnt nfs defaults 0 0

TRANSPORT METHODS
       NFS clients send requests to NFS servers via Remote Procedure Calls, or
       RPCs.  The RPC client discovers remote service endpoints automatically,
       handles per-request authentication, adjusts request parameters for dif‐
       ferent  byte  endianness on client and server, and retransmits requests
       that may have been lost by the network or  server.   RPC  requests  and
       replies flow over a network transport.

       In most cases, the mount(8) command, NFS client, and NFS server can au‐
       tomatically negotiate proper transport and data transfer size  settings
       for  a  mount  point.  In some cases, however, it pays to specify these
       settings explicitly using mount options.

       Traditionally, NFS clients  used  the  UDP  transport  exclusively  for
       transmitting requests to servers.  Though its implementation is simple,
       NFS over UDP has many limitations that  prevent  smooth  operation  and
       good  performance  in some common deployment environments.  Even an in‐
       significant packet loss rate results in the loss of whole NFS requests;
       as  such, retransmit timeouts are usually in the subsecond range to al‐
       low clients to recover quickly from dropped requests, but this can  re‐
       sult in extraneous network traffic and server load.

       However,  UDP  can be quite effective in specialized settings where the
       network's MTU is large relative to NFS's data transfer size  (such  as
       network environments that enable jumbo Ethernet frames).  In such envi‐
       ronments, trimming the rsize and wsize settings so that each  NFS  read
       or write request fits in just a few network frames (or even in a single
       frame) is advised.  This reduces the probability that  the  loss  of  a
       single  MTU-sized  network frame results in the loss of an entire large
       read or write request.

       TCP is the default transport protocol used for all modern NFS implemen‐
       tations.  It performs well in almost every conceivable network environ‐
       ment and provides excellent guarantees against data  corruption  caused
       by  network  unreliability.   TCP is often a requirement for mounting a
       server through a network firewall.

       Under normal circumstances, networks drop packets much more  frequently
       than  NFS  servers  drop  requests.   As such, an aggressive retransmit
       timeout  setting for NFS over TCP is unnecessary. Typical timeout  set‐
       tings  for  NFS  over  TCP are between one and ten minutes.  After  the
       client exhausts its retransmits (the value of  the  retrans  mount  op‐
       tion), it assumes a network partition has occurred, and attempts to re‐
       connect to the server on a fresh socket. Since TCP itself makes network
       data  transfer  reliable,  rsize and wsize can safely be allowed to de‐
       fault to the largest values supported by both client and server,  inde‐
       pendent of the network's MTU size.

   Using the mountproto mount option
       This  section  applies only to NFS version 3 mounts since NFS version 4
       does not use a separate protocol for mount requests.

       The Linux NFS client can use a different transport  for  contacting  an
       NFS server's rpcbind service, its mountd service, its Network Lock Man‐
       ager (NLM) service, and its NFS service.  The exact transports employed
       by the Linux NFS client for each mount point depends on the settings of
       the transport mount options, which include proto, mountproto, udp,  and
       tcp.

       The  client sends Network Status Manager (NSM) notifications via UDP no
       matter what transport options are specified, but listens for server NSM
       notifications  on  both  UDP and TCP.  The NFS Access Control List (NF‐
       SACL) protocol shares the same transport as the main NFS service.

       If no transport options are specified, the Linux NFS client uses UDP to
       contact the server's mountd service, and TCP to contact its NLM and NFS
       services by default.

       If the server does not support these transports for these services, the
       mount(8)  command  attempts  to  discover what the server supports, and
       then retries the mount request once using  the  discovered  transports.
       If  the server does not advertise any transport supported by the client
       or is misconfigured, the mount request fails.  If the bg option  is  in
       effect,  the  mount command backgrounds itself and continues to attempt
       the specified mount request.

       When the proto option, the udp option, or the tcp option  is  specified
       but  the  mountproto  option is not, the specified transport is used to
       contact both the server's mountd service and the NLM and  NFS  services.

       If the mountproto option is specified but none of the proto, udp or tcp
       options are specified, then the specified transport  is  used  for  the
       initial mountd request, but the mount command attempts to discover what
       the server supports for the NFS protocol, preferring TCP if both trans‐
       ports are supported.

       If both the mountproto and proto (or udp or tcp) options are specified,
       then the transport specified by the mountproto option is used  for  the
       initial mountd request, and the transport specified by the proto option
       (or the udp or tcp options) is used for NFS, no matter what order these
       options  appear.   No automatic service discovery is performed if these
       options are specified.

       If any of the proto, udp, tcp, or mountproto options are specified more
       than  once on the same mount command line, then the value of the right‐
       most instance of each of these options takes effect.
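
       For example, a command such as the following (paths are  placeholders)
       uses  UDP  for  the initial mountd request but TCP for the NLM and NFS
       services:

               mount -t nfs -o mountproto=udp,proto=tcp server:/export /mnt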
884
885   Using NFS over UDP on high-speed links
886       Using NFS over UDP on high-speed links such as Gigabit can cause silent
887       data corruption.
888
889       The  problem  can be triggered at high loads, and is caused by problems
890       in IP fragment reassembly. NFS read and writes typically  transmit  UDP
891       packets of 4 Kilobytes or more, which have to be broken up into several
892       fragments in order to be sent over  the  Ethernet  link,  which  limits
893       packets  to  1500 bytes by default. This process happens at the IP net‐
894       work layer and is called fragmentation.
895
896       In order to identify fragments that belong together, IP assigns a 16bit
897       IP  ID  value  to  each  packet;  fragments generated from the same UDP
898       packet will have the same IP ID.  The  receiving  system  will  collect
899       these  fragments and combine them to form the original UDP packet. This
900       process is called reassembly. The default timeout for packet reassembly
901       is 30 seconds; if the network stack does not receive all fragments of a
902       given packet within this interval, it assumes the  missing  fragment(s)
903       got lost and discards those it already received.
904
905       The  problem  this creates over high-speed links is that it is possible
906       to send more than 65536 packets within 30 seconds. In fact, with  heavy
907       NFS  traffic  one can observe that the IP IDs repeat after about 5 sec‐
908       onds.
909
       This has serious effects on reassembly: if one fragment gets lost, an-
       other fragment from a different packet but with the same IP ID will ar-
       rive within the 30 second timeout, and the network stack will combine
       these fragments to form a new packet.  Most of the time, network layers
       above IP will detect this mismatched reassembly - in the case of UDP,
       the UDP checksum, which is a 16-bit checksum over the entire packet
       payload, will usually not match, and UDP will discard the bad packet.
917
       However, the UDP checksum is only 16 bits, so there is a 1 in 65536
       chance that it will match even for a corrupted payload.  When that
       happens, silent data corruption occurs.
922
       This risk should be taken seriously, at least on Gigabit Ethernet.
       Network speeds of 100 Mbit/s are less problematic, because with most
       traffic patterns IP ID wrap-around will take much longer than 30
       seconds.
927
928       It is therefore strongly recommended to use NFS over TCP  where  possi‐
929       ble, since TCP does not perform fragmentation.
930
931       If  you absolutely have to use NFS over UDP over Gigabit Ethernet, some
932       steps can be taken to mitigate the problem and reduce  the  probability
933       of corruption:
934
       Jumbo frames:  Many Gigabit network cards are capable of transmitting
                      frames bigger than the 1500-byte limit of traditional
                      Ethernet, typically 9000 bytes.  Using jumbo frames of
                      9000 bytes will allow you to run NFS over UDP at a page
                      size of 8K without fragmentation.  Of course, this is
                      only feasible if all involved stations support jumbo
                      frames.

                      To enable a machine to send jumbo frames on cards that
                      support it, it is sufficient to configure the interface
                      for an MTU value of 9000.
946
       Lower reassembly timeout:
                      By lowering this timeout below the time it takes the IP
                      ID counter to wrap around, incorrect reassembly of frag-
                      ments can be prevented as well.  To do so, simply write
                      the new timeout value (in seconds) to the file
                      /proc/sys/net/ipv4/ipfrag_time.

                      A value of 2 seconds will greatly reduce the probability
                      of IP ID clashes on a single Gigabit link, while still
                      allowing for a reasonable timeout when receiving frag-
                      mented traffic from distant peers.  Example commands for
                      both mitigations follow this list.
958
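       As a sketch, both mitigations can be applied from the shell.  The
       interface name eth0 and the timeout of 2 seconds are illustrative; the
       MTU command assumes an iproute2-based system, and all stations on the
       path must support jumbo frames:

               # Enable jumbo frames on the local interface.
               ip link set eth0 mtu 9000

               # Lower the IP fragment reassembly timeout to 2 seconds.
               echo 2 > /proc/sys/net/ipv4/ipfrag_time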

DATA AND METADATA COHERENCE

960       Some modern cluster file systems provide perfect cache coherence  among
961       their  clients.  Perfect cache coherence among disparate NFS clients is
962       expensive to achieve, especially on wide area networks.  As  such,  NFS
963       settles  for  weaker cache coherence that satisfies the requirements of
964       most file sharing types.
965
966   Close-to-open cache consistency
967       Typically file sharing is completely sequential.  First client A  opens
968       a  file,  writes  something to it, then closes it.  Then client B opens
969       the same file, and reads the changes.
970
971       When an application opens a file stored on an NFS version 3 server, the
972       NFS  client  checks that the file exists on the server and is permitted
973       to the opener by sending a GETATTR or ACCESS request.  The  NFS  client
974       sends  these  requests regardless of the freshness of the file's cached
975       attributes.
976
977       When the application closes the file, the NFS client  writes  back  any
978       pending  changes  to  the  file  so  that  the next opener can view the
979       changes.  This also gives the NFS client an opportunity to report write
980       errors to the application via the return code from close(2).
981
982       The behavior of checking at open time and flushing at close time is re‐
983       ferred to as close-to-open cache consistency, or CTO.  It can  be  dis‐
984       abled for an entire mount point using the nocto mount option.
985
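       A minimal illustration of this sequential pattern, assuming the same
       export is mounted at /mnt on both clients (all names illustrative):

               # On client A: open, write, close.  The close(2) flushes
               # the written data back to the server.
               echo "new contents" > /mnt/file

               # Later, on client B: the open(2) performed by cat forces
               # revalidation with the server, so the read sees A's changes.
               cat /mnt/file
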
986   Weak cache consistency
987       There  are  still  opportunities  for  a client's data cache to contain
988       stale data.  The NFS version 3 protocol introduced "weak cache  consis‐
989       tency" (also known as WCC) which provides a way of efficiently checking
990       a file's attributes before and after a single request.  This  allows  a
991       client  to  help  identify  changes  that could have been made by other
992       clients.
993
994       When a client is using many concurrent operations that update the  same
995       file  at the same time (for example, during asynchronous write behind),
996       it is still difficult to tell whether it was that client's  updates  or
997       some other client's updates that altered the file.
998
999   Attribute caching
1000       Use  the  noac  mount option to achieve attribute cache coherence among
1001       multiple clients.  Almost every file system operation checks  file  at‐
1002       tribute  information.   The  client keeps this information cached for a
1003       period of time to reduce network and server load.  When noac is in  ef‐
1004       fect,  a  client's  file attribute cache is disabled, so each operation
1005       that needs to check a file's attributes is forced to  go  back  to  the
1006       server.   This  permits a client to see changes to a file very quickly,
1007       at the cost of many extra network operations.
1008
1009       Be careful not to confuse the noac option with "no data caching."   The
1010       noac  mount  option prevents the client from caching file metadata, but
1011       there are still races that may result in data cache incoherence between
1012       client and server.
1013
1014       The  NFS  protocol  is not designed to support true cluster file system
1015       cache coherence without some type of application serialization.  If ab‐
1016       solute  cache  coherence among clients is required, applications should
1017       use file locking. Alternatively, applications can also open their files
1018       with the O_DIRECT flag to disable data caching entirely.
1019
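       For example, an /etc/fstab entry that disables attribute caching might
       look like this (server name and paths are illustrative):

               server:/export   /mnt   nfs   noac   0 0
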
1020   File timestamp maintenance
1021       NFS  servers are responsible for managing file and directory timestamps
1022       (atime, ctime, and mtime).  When a file is accessed or  updated  on  an
1023       NFS  server,  the file's timestamps are updated just like they would be
1024       on a filesystem local to an application.
1025
1026       NFS clients cache file  attributes,  including  timestamps.   A  file's
1027       timestamps are updated on NFS clients when its attributes are retrieved
1028       from the NFS server.  Thus there may be some delay before timestamp up‐
1029       dates on an NFS server appear to applications on NFS clients.
1030
1031       To  comply with the POSIX filesystem standard, the Linux NFS client re‐
1032       lies on NFS servers to keep a file's mtime and ctime  timestamps  prop‐
1033       erly  up  to  date.  It does this by flushing local data changes to the
1034       server before reporting mtime to applications via system calls such  as
1035       stat(2).
1036
1037       The  Linux  client  handles  atime  updates more loosely, however.  NFS
1038       clients maintain good performance by caching data, but that means  that
1039       application  reads,  which  normally update atime, are not reflected to
1040       the server where a file's atime is actually maintained.
1041
1042       Because of this caching behavior, the Linux NFS client does not support
1043       generic atime-related mount options.  See mount(8) for details on these
1044       options.
1045
1046       In particular, the atime/noatime, diratime/nodiratime, relatime/norela‐
1047       time, and strictatime/nostrictatime mount options have no effect on NFS
1048       mounts.
1049
1050       /proc/mounts may report that the relatime mount option is  set  on  NFS
1051       mounts,  but  in fact the atime semantics are always as described here,
1052       and are not like relatime semantics.
1053
1054   Directory entry caching
       The Linux NFS client caches the result of all NFS LOOKUP requests.  If
       the requested directory entry exists on the server, the result is re-
       ferred to as a positive lookup result.  If the requested directory en-
       try does not exist on the server (that is, the server returned ENOENT),
       the result is referred to as a negative lookup result.
1060
1061       To detect when directory entries have been  added  or  removed  on  the
1062       server,  the  Linux  NFS  client  watches  a directory's mtime.  If the
1063       client detects a change in a directory's mtime, the  client  drops  all
1064       cached  LOOKUP results for that directory.  Since the directory's mtime
1065       is a cached attribute, it may take some time before a client notices it
1066       has  changed.  See the descriptions of the acdirmin, acdirmax, and noac
1067       mount options for more information about how long a  directory's  mtime
1068       is cached.
1069
1070       Caching directory entries improves the performance of applications that
1071       do not share files with applications on other  clients.   Using  cached
1072       information  about directories can interfere with applications that run
1073       concurrently on multiple clients and need to detect the creation or re‐
1074       moval  of  files quickly, however.  The lookupcache mount option allows
1075       some tuning of directory entry caching behavior.
1076
1077       Before kernel release 2.6.28, the Linux NFS client tracked  only  posi‐
1078       tive  lookup results.  This permitted applications to detect new direc‐
1079       tory entries created by other clients  quickly  while  still  providing
1080       some of the performance benefits of caching.  If an application depends
1081       on the previous lookup caching behavior of the Linux  NFS  client,  you
1082       can use lookupcache=positive.
1083
1084       If  the client ignores its cache and validates every application lookup
1085       request with the server, that client can immediately detect when a  new
1086       directory  entry  has been either created or removed by another client.
1087       You can specify this behavior using lookupcache=none.   The  extra  NFS
1088       requests  needed if the client does not cache directory entries can ex‐
1089       act a performance penalty.  Disabling lookup caching should  result  in
1090       less of a performance penalty than using noac, and has no effect on how
1091       the NFS client caches the attributes of files.
1092
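       For example (hypothetical server and export), to keep only the
       pre-2.6.28 behavior of caching positive results, or to disable lookup
       caching entirely:

               mount -o lookupcache=positive server:/export /mnt
               mount -o lookupcache=none server:/export /mnt
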
1093   The sync mount option
1094       The NFS client treats the sync mount option differently than some other
1095       file  systems  (refer to mount(8) for a description of the generic sync
1096       and async mount options).  If neither sync nor async is  specified  (or
1097       if the async option is specified), the NFS client delays sending appli‐
1098       cation writes to the server until any of these events occur:
1099
1100              Memory pressure forces reclamation of system memory resources.
1101
1102              An  application  flushes  file  data  explicitly  with  sync(2),
1103              msync(2), or fsync(3).
1104
1105              An application closes a file with close(2).
1106
1107              The file is locked/unlocked via fcntl(2).
1108
1109       In other words, under normal circumstances, data written by an applica‐
1110       tion may not immediately appear on the server that hosts the file.
1111
1112       If the sync option is specified on a mount point, any system call  that
1113       writes data to files on that mount point causes that data to be flushed
1114       to the server before the system call returns  control  to  user  space.
1115       This provides greater data cache coherence among clients, but at a sig‐
1116       nificant performance cost.
1117
1118       Applications can use the O_SYNC open flag to force  application  writes
1119       to  individual files to go to the server immediately without the use of
1120       the sync mount option.
1121
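       As a sketch, the first command below forces synchronous writes for an
       entire mount point, while the second demonstrates O_SYNC on a single
       file (names are illustrative; the oflag=sync option assumes GNU dd):

               mount -o sync server:/export /mnt

               dd if=/dev/zero of=/mnt/file bs=4k count=1 oflag=sync
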
1122   Using file locks with NFS
1123       The Network Lock Manager protocol is a separate sideband protocol  used
1124       to  manage file locks in NFS version 3.  To support lock recovery after
1125       a client or server reboot, a second sideband protocol -- known  as  the
1126       Network Status Manager protocol -- is also required.  In NFS version 4,
1127       file locking is supported directly in the main NFS  protocol,  and  the
1128       NLM and NSM sideband protocols are not used.
1129
1130       In  most  cases, NLM and NSM services are started automatically, and no
1131       extra configuration is required.  Configure all NFS clients with fully-
1132       qualified  domain  names to ensure that NFS servers can find clients to
1133       notify them of server reboots.
1134
1135       NLM supports advisory file locks only.  To lock NFS files, use fcntl(2)
1136       with  the  F_GETLK  and F_SETLK commands.  The NFS client converts file
1137       locks obtained via flock(2) to advisory locks.
1138
1139       When mounting servers that do not support the  NLM  protocol,  or  when
1140       mounting  an  NFS server through a firewall that blocks the NLM service
1141       port, specify the nolock mount option. NLM  locking  must  be  disabled
1142       with  the  nolock option when using NFS to mount /var because /var con‐
1143       tains files used by the NLM implementation on Linux.
1144
1145       Specifying the nolock option may also be advised to improve the perfor‐
1146       mance  of  a  proprietary application which runs on a single client and
1147       uses file locks extensively.
1148
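       For example, to mount /var from an illustrative server with NLM
       locking disabled:

               mount -o nolock server:/var /var
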
1149   NFS version 4 caching features
1150       The data and metadata caching behavior of NFS version 4 clients is sim‐
1151       ilar to that of earlier versions.  However, NFS version 4 adds two fea‐
1152       tures that improve cache behavior: change attributes and  file  delega‐
1153       tion.
1154
1155       The  change  attribute is a new part of NFS file and directory metadata
1156       which tracks data changes.  It replaces the use of a  file's  modifica‐
1157       tion  and  change time stamps as a way for clients to validate the con‐
1158       tent of their caches.  Change attributes are independent  of  the  time
1159       stamp resolution on either the server or client, however.
1160
1161       A  file  delegation  is  a contract between an NFS version 4 client and
1162       server that allows the client to treat a  file  temporarily  as  if  no
1163       other client is accessing it.  The server promises to notify the client
1164       (via a callback request) if another  client  attempts  to  access  that
1165       file.  Once a file has been delegated to a client, the client can cache
1166       that file's data  and  metadata  aggressively  without  contacting  the
1167       server.
1168
       File delegations come in two flavors: read and write.  A read delega-
       tion means that the server notifies the client about any other clients
       that want to write to the file.  A write delegation means that the
       client is notified about both read and write accessors.
1173
1174       Servers grant file delegations when a file is opened,  and  can  recall
1175       delegations  at  any  time when another client wants access to the file
1176       that conflicts with any delegations already  granted.   Delegations  on
1177       directories are not supported.
1178
1179       In  order to support delegation callback, the server checks the network
1180       return path to the client during the client's initial contact with  the
1181       server.   If  contact with the client cannot be established, the server
1182       simply does not grant any delegations to that client.
1183

SECURITY CONSIDERATIONS

1185       NFS servers control access to file data, but they depend on  their  RPC
1186       implementation  to provide authentication of NFS requests.  Traditional
1187       NFS access control mimics the standard mode bit access control provided
1188       in local file systems.  Traditional RPC authentication uses a number to
1189       represent each user (usually the user's own uid), a number to represent
1190       the  user's  group  (the  user's  gid), and a set of up to 16 auxiliary
1191       group numbers to represent other groups of which the user may be a mem‐
1192       ber.
1193
1194       Typically,  file  data  and user ID values appear unencrypted (i.e. "in
1195       the clear") on the network.  Moreover, NFS versions 2 and 3  use  sepa‐
1196       rate  sideband protocols for mounting, locking and unlocking files, and
1197       reporting system status of clients and servers.  These auxiliary proto‐
1198       cols use no authentication.
1199
1200       In  addition  to  combining  these sideband protocols with the main NFS
1201       protocol, NFS version 4 introduces more advanced forms of  access  con‐
1202       trol,  authentication, and in-transit data protection.  The NFS version
1203       4 specification mandates support for strong authentication and security
1204       flavors  that  provide  per-RPC integrity checking and encryption.  Be‐
1205       cause NFS version 4 combines the function  of  the  sideband  protocols
1206       into  the main NFS protocol, the new security features apply to all NFS
1207       version 4 operations including  mounting,  file  locking,  and  so  on.
1208       RPCGSS  authentication  can also be used with NFS versions 2 and 3, but
1209       it does not protect their sideband protocols.
1210
1211       The sec mount option specifies the security flavor used for  operations
1212       on  behalf  of users on that NFS mount point.  Specifying sec=krb5 pro‐
1213       vides cryptographic proof of a user's identity  in  each  RPC  request.
1214       This  provides  strong  verification of the identity of users accessing
1215       data on the server.  Note that additional configuration besides  adding
1216       this  mount  option  is  required in order to enable Kerberos security.
1217       Refer to the rpc.gssd(8) man page for details.
1218
1219       Two additional flavors of Kerberos security are  supported:  krb5i  and
1220       krb5p.   The  krb5i security flavor provides a cryptographically strong
1221       guarantee that the data in each RPC request has not been tampered with.
1222       The  krb5p  security  flavor encrypts every RPC request to prevent data
1223       exposure during network transit; however, expect some  performance  im‐
1224       pact  when using integrity checking or encryption.  Similar support for
1225       other forms of cryptographic security is also available.
1226
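       For example (hypothetical export; Kerberos itself must already be
       configured, as described in rpc.gssd(8)):

               mount -o sec=krb5  server:/export /mnt   # authentication
               mount -o sec=krb5i server:/export /mnt   # plus integrity
               mount -o sec=krb5p server:/export /mnt   # plus encryption
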
1227   NFS version 4 filesystem crossing
       The NFS version 4 protocol allows a client to renegotiate the security
       flavor when the client crosses into a new filesystem on the server.
       The newly negotiated flavor affects only accesses of the new filesys-
       tem.
1232
1233       Such negotiation typically occurs when a client crosses from a server's
1234       pseudo-fs into one of the server's exported physical filesystems, which
1235       often have more restrictive security settings than the pseudo-fs.
1236
1237   NFS version 4 Leases
1238       In NFS version 4, a lease is a period during which a server irrevocably
1239       grants a client file locks.  Once the lease expires, the server may re‐
1240       voke  those  locks.  Clients periodically renew their leases to prevent
1241       lock revocation.
1242
1243       After an NFS version 4 server reboots, each  client  tells  the  server
1244       about  existing  file open and lock state under its lease before opera‐
1245       tion can continue.  If a client reboots, the server frees all open  and
1246       lock state associated with that client's lease.
1247
       When establishing a lease, therefore, a client must identify itself to
       a server.  Each client presents an arbitrary string to distinguish it-
       self from other clients.  The client administrator can supplement the
       default identity string using the nfs.nfs4_unique_id module parameter
       to avoid collisions with other client identity strings.
1253
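       A sketch of setting this parameter persistently via modprobe
       configuration (the UUID value and file name are illustrative; the
       parameter is assumed to belong to the nfs module):

               # /etc/modprobe.d/nfsclient.conf
               options nfs nfs4_unique_id=0e5ae854-8a9c-4b1d-97f6-2f3d5c7b9a01
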
1254       A  client  also uses a unique security flavor and principal when it es‐
1255       tablishes its lease.  If two clients present the same identity  string,
1256       a  server  can  use client principals to distinguish between them, thus
1257       securely preventing one client from interfering with the other's lease.
1258
1259       The Linux NFS client establishes  one  lease  on  each  NFS  version  4
1260       server.   Lease  management  operations, such as lease renewal, are not
1261       done on behalf of a particular file, lock, user, or mount point, but on
1262       behalf  of the client that owns that lease.  A client uses a consistent
1263       identity string, security flavor, and principal across  client  reboots
1264       to ensure that the server can promptly reap expired lease state.
1265
1266       When  Kerberos  is  configured  on a Linux NFS client (i.e., there is a
1267       /etc/krb5.keytab on that client), the client attempts to use a Kerberos
1268       security flavor for its lease management operations.  Kerberos provides
1269       secure authentication of each client.  By default, the client uses  the
1270       host/  or  nfs/ service principal in its /etc/krb5.keytab for this pur‐
1271       pose, as described in rpc.gssd(8).
1272
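       To check which service principals are available for this purpose, the
       keytab can be listed (a sketch using the MIT Kerberos klist utility):

               klist -k /etc/krb5.keytab
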
1273       If the client has Kerberos configured, but the server does not,  or  if
1274       the  client does not have a keytab or the requisite service principals,
1275       the client uses AUTH_SYS and UID 0 for lease management.
1276
1277   Using non-privileged source ports
1278       NFS clients usually communicate with NFS servers via  network  sockets.
1279       Each end of a socket is assigned a port value, which is simply a number
1280       between 1 and 65535 that distinguishes socket endpoints at the same  IP
1281       address.   A  socket  is  uniquely defined by a tuple that includes the
1282       transport protocol (TCP or UDP) and the port values and IP addresses of
1283       both endpoints.
1284
1285       The  NFS  client  can choose any source port value for its sockets, but
1286       usually chooses a privileged port.  A privileged port is a  port  value
1287       less  than  1024.   Only  a  process  with root privileges may create a
1288       socket with a privileged source port.
1289
1290       The exact range of privileged source ports that can be chosen is set by
1291       a pair of sysctls to avoid choosing a well-known port, such as the port
1292       used by ssh.  This means the number of source ports available  for  the
1293       NFS  client, and therefore the number of socket connections that can be
1294       used at the same time, is practically limited to only a few hundred.
1295
1296       As described above, the traditional default NFS authentication  scheme,
1297       known as AUTH_SYS, relies on sending local UID and GID numbers to iden‐
1298       tify users making NFS requests.  An NFS server assumes that if  a  con‐
1299       nection  comes  from  a privileged port, the UID and GID numbers in the
1300       NFS requests on this connection have been verified by the client's ker‐
1301       nel  or  some  other local authority.  This is an easy system to spoof,
1302       but on a trusted physical network between trusted hosts, it is entirely
1303       adequate.
1304
1305       Roughly  speaking,  one  socket is used for each NFS mount point.  If a
1306       client could use non-privileged source ports as  well,  the  number  of
1307       sockets  allowed,  and  thus  the  maximum  number  of concurrent mount
1308       points, would be much larger.
1309
       Using non-privileged source ports may compromise server security some-
       what, since any user on AUTH_SYS mount points can now pretend to be any
       other when making NFS requests.  Thus NFS servers do not support this
       by default; servers that do support it usually enable it explicitly
       via an export option.
1314
1315       To  retain  good security while allowing as many mount points as possi‐
1316       ble, it is best to allow non-privileged client connections only if  the
1317       server and client both require strong authentication, such as Kerberos.
1318
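       As a sketch, the reserved-port range can be inspected via the sunrpc
       sysctls (typically sunrpc.min_resvport and sunrpc.max_resvport), and a
       mount that combines non-privileged source ports with Kerberos might
       look like this (server and paths are illustrative):

               sysctl sunrpc.min_resvport sunrpc.max_resvport
               mount -o noresvport,sec=krb5 server:/export /mnt
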
1319   Mounting through a firewall
1320       A  firewall  may reside between an NFS client and server, or the client
1321       or server may block some of its own ports via IP filter rules.   It  is
1322       still  possible  to mount an NFS server through a firewall, though some
1323       of the mount(8) command's automatic service endpoint  discovery  mecha‐
1324       nisms  may not work; this requires you to provide specific endpoint de‐
1325       tails via NFS mount options.
1326
1327       NFS servers normally run a portmapper or rpcbind  daemon  to  advertise
1328       their  service  endpoints to clients. Clients use the rpcbind daemon to
1329       determine:
1330
1331              What network port each RPC-based service is using
1332
1333              What transport protocols each RPC-based service supports
1334
1335       The rpcbind daemon uses a well-known port number (111) to help  clients
1336       find  a service endpoint.  Although NFS often uses a standard port num‐
1337       ber (2049), auxiliary services such as the NLM service can  choose  any
1338       unused port number at random.
1339
       Common firewall configurations block the well-known rpcbind port.  In
       the absence of an rpcbind service, the server administrator fixes the
       port number of NFS-related services so that the firewall can allow ac-
       cess to specific NFS service ports.  Client administrators then specify
       the port number for the mountd service via the mount(8) command's
       mountport option.  It may also be necessary to enforce the use of TCP
       or UDP if the firewall blocks one of those transports.
1347
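       For example, if the server administrator has fixed mountd to port 4002
       (an illustrative choice) and the firewall permits only TCP:

               mount -o mountport=4002,proto=tcp server:/export /mnt
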
1348   NFS Access Control Lists
1349       Solaris allows NFS version 3 clients direct access to POSIX Access Con‐
1350       trol Lists stored in its local file systems.  This proprietary sideband
1351       protocol,  known  as  NFSACL,  provides richer access control than mode
1352       bits.  Linux implements this protocol for compatibility  with  the  So‐
1353       laris  NFS implementation.  The NFSACL protocol never became a standard
1354       part of the NFS version 3 specification, however.
1355
1356       The NFS version 4 specification mandates a new version of  Access  Con‐
1357       trol Lists that are semantically richer than POSIX ACLs.  NFS version 4
1358       ACLs are not fully compatible with POSIX ACLs; as such,  some  transla‐
1359       tion  between  the  two  is required in an environment that mixes POSIX
1360       ACLs and NFS version 4.
1361

THE REMOUNT OPTION

1363       Generic mount options such as rw and sync can be modified on NFS  mount
1364       points  using the remount option.  See mount(8) for more information on
1365       generic mount options.
1366
       With few exceptions, NFS-specific mount options cannot be modified
       during a remount.  The underlying transport or NFS version cannot be
       changed by a remount, for example.
1370
       Performing a remount on an NFS file system mounted with the noac option
       may have unintended consequences.  The noac option is a combination of
       the generic option sync and the NFS-specific option actimeo=0.
1374
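       In other words, the following two remount commands are equivalent
       (mount point illustrative):

               mount -o remount,noac /mnt
               mount -o remount,sync,actimeo=0 /mnt
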
1375   Unmounting after a remount
1376       For mount points that use NFS versions 2 or 3, the NFS  umount  subcom‐
1377       mand  depends on knowing the original set of mount options used to per‐
1378       form the MNT operation.  These options are stored on disk  by  the  NFS
1379       mount subcommand, and can be erased by a remount.
1380
1381       To ensure that the saved mount options are not erased during a remount,
1382       specify either the local mount directory, or the  server  hostname  and
1383       export pathname, but not both, during a remount.  For example,
1384
1385               mount -o remount,ro /mnt
1386
1387       merges the mount option ro with the mount options already saved on disk
1388       for the NFS server mounted at /mnt.
1389

FILES

1391       /etc/fstab     file system table
1392
1393       /etc/nfsmount.conf
1394                      Configuration file for NFS mounts
1395

NOTES

1397       Before 2.4.7, the Linux NFS client did not support NFS over TCP.
1398
1399       Before 2.4.20, the Linux NFS  client  used  a  heuristic  to  determine
1400       whether cached file data was still valid rather than using the standard
1401       close-to-open cache coherency method described above.
1402
       Starting with 2.4.22, the Linux NFS client employs a Van Jacobson-based
       RTT estimator to determine retransmit timeout values when using NFS
       over UDP.
1406
1407       Before 2.6.0, the Linux NFS client did not support NFS version 4.
1408
1409       Before 2.6.8, the Linux NFS client  used  only  synchronous  reads  and
1410       writes when the rsize and wsize settings were smaller than the system's
1411       page size.
1412
       The Linux client's support for protocol versions depends on whether the
       kernel was built with options CONFIG_NFS_V2, CONFIG_NFS_V3,
       CONFIG_NFS_V4, CONFIG_NFS_V4_1, and CONFIG_NFS_V4_2.
1416

SEE ALSO

       fstab(5), mount(8), umount(8), mount.nfs(8), umount.nfs(8), exports(5),
       nfsmount.conf(5), netconfig(5), ipv6(7), nfsd(8), sm-notify(8),
       rpc.statd(8), rpc.idmapd(8), rpc.gssd(8), rpc.svcgssd(8), kerberos(1)
1421
1422       RFC 768 for the UDP specification.
1423       RFC 793 for the TCP specification.
1424       RFC 1813 for the NFS version 3 specification.
1425       RFC 1832 for the XDR specification.
1426       RFC 1833 for the RPC bind specification.
1427       RFC 2203 for the RPCSEC GSS API protocol specification.
1428       RFC 7530 for the NFS version 4.0 specification.
1429       RFC 5661 for the NFS version 4.1 specification.
1430       RFC 7862 for the NFS version 4.2 specification.
1431
1432
1433
1434                                9 October 2012                          NFS(5)