NFS(5)                        File Formats Manual                       NFS(5)


NAME
       nfs - fstab format and options for the nfs file systems

SYNOPSIS
       /etc/fstab

DESCRIPTION
       NFS is an Internet Standard protocol created by Sun Microsystems in
       1984.  NFS was developed to allow file sharing between systems
       residing on a local area network.  Depending on kernel configuration,
       the Linux NFS client may support NFS versions 2, 3, 4.0, 4.1, or 4.2.

       The mount(8) command attaches a file system to the system's name space
       hierarchy at a given mount point.  The /etc/fstab file describes how
       mount(8) should assemble a system's file name hierarchy from various
       independent file systems (including file systems exported by NFS
       servers).  Each line in the /etc/fstab file describes a single file
       system, its mount point, and a set of default mount options for that
       mount point.

       For NFS file system mounts, a line in the /etc/fstab file specifies
       the server name, the path name of the exported server directory to
       mount, the local directory that is the mount point, the type of file
       system that is being mounted, and a list of mount options that control
       the way the filesystem is mounted and how the NFS client behaves when
       accessing files on this mount point.  The fifth and sixth fields on
       each line are not used by NFS, thus conventionally each contain the
       digit zero.  For example:

              server:path   /mountpoint   fstype   option,option,...   0 0

       The server's hostname and export pathname are separated by a colon,
       while the mount options are separated by commas.  The remaining fields
       are separated by blanks or tabs.

       The server's hostname can be an unqualified hostname, a fully
       qualified domain name, a dotted quad IPv4 address, or an IPv6 address
       enclosed in square brackets.  Link-local and site-local IPv6 addresses
       must be accompanied by an interface identifier.  See ipv6(7) for
       details on specifying raw IPv6 addresses.

       The fstype field contains "nfs".  Use of the "nfs4" fstype in
       /etc/fstab is deprecated.

MOUNT OPTIONS
       Refer to mount(8) for a description of generic mount options available
       for all file systems.  If you do not need to specify any mount
       options, use the generic option defaults in /etc/fstab.

   Options supported by all versions
       These options are valid to use with any NFS version.

       nfsvers=n      The NFS protocol version number used to contact the
                      server's NFS service.  If the server does not support
                      the requested version, the mount request fails.  If
                      this option is not specified, the client tries version
                      4.2 first, then negotiates down until it finds a
                      version supported by the server.
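
                      For example, a sketch of an /etc/fstab line that pins a
                      mount to NFS version 4.1 (the server name and paths are
                      placeholders):

                             server:/export  /mnt  nfs  nfsvers=4.1  0 0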

       vers=n         This option is an alternative to the nfsvers option.
                      It is included for compatibility with other operating
                      systems.

       soft / hard    Determines the recovery behavior of the NFS client
                      after an NFS request times out.  If neither option is
                      specified (or if the hard option is specified), NFS
                      requests are retried indefinitely.  If the soft option
                      is specified, then the NFS client fails an NFS request
                      after retrans retransmissions have been sent, causing
                      the NFS client to return an error to the calling
                      application.

                      NB: A so-called "soft" timeout can cause silent data
                      corruption in certain cases.  As such, use the soft
                      option only when client responsiveness is more
                      important than data integrity.  Using NFS over TCP or
                      increasing the value of the retrans option may mitigate
                      some of the risks of using the soft option.
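
                      For example, a sketch that favors client responsiveness
                      over data integrity on a read-only mount (names are
                      placeholders):

                             server:/export  /mnt  nfs  ro,soft,retrans=6  0 0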

       intr / nointr  This option is provided for backward compatibility.  It
                      is ignored after kernel 2.6.25.

       timeo=n        The time in deciseconds (tenths of a second) the NFS
                      client waits for a response before it retries an NFS
                      request.

                      For NFS over TCP the default timeo value is 600 (60
                      seconds).  The NFS client performs linear backoff:
                      after each retransmission the timeout is increased by
                      timeo up to the maximum of 600 seconds.

                      However, for NFS over UDP, the client uses an adaptive
                      algorithm to estimate an appropriate timeout value for
                      frequently used request types (such as READ and WRITE
                      requests), but uses the timeo setting for infrequently
                      used request types (such as FSINFO requests).  If the
                      timeo option is not specified, infrequently used
                      request types are retried after 1.1 seconds.  After
                      each retransmission, the NFS client doubles the timeout
                      for that request, up to a maximum timeout length of 60
                      seconds.

       retrans=n      The number of times the NFS client retries a request
                      before it attempts further recovery action.  If the
                      retrans option is not specified, the NFS client tries
                      each UDP request three times and each TCP request
                      twice.

                      The NFS client generates a "server not responding"
                      message after retrans retries, then attempts further
                      recovery (depending on whether the hard mount option is
                      in effect).
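
                      For example, a sketch that shortens the TCP timeout to
                      15 seconds and allows five retries (the values are
                      illustrative, not recommendations):

                             server:/export  /mnt  nfs  timeo=150,retrans=5  0 0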

       rsize=n        The maximum number of bytes in each network READ
                      request that the NFS client can receive when reading
                      data from a file on an NFS server.  The actual data
                      payload size of each NFS READ request is equal to or
                      smaller than the rsize setting.  The largest read
                      payload supported by the Linux NFS client is 1,048,576
                      bytes (one megabyte).

                      The rsize value is a positive integral multiple of
                      1024.  Specified rsize values lower than 1024 are
                      replaced with 4096; values larger than 1048576 are
                      replaced with 1048576.  If a specified value is within
                      the supported range but not a multiple of 1024, it is
                      rounded down to the nearest multiple of 1024.

                      If an rsize value is not specified, or if the specified
                      rsize value is larger than the maximum that either
                      client or server can support, the client and server
                      negotiate the largest rsize value that they can both
                      support.

                      The rsize mount option as specified on the mount(8)
                      command line appears in the /etc/mtab file.  However,
                      the effective rsize value negotiated by the client and
                      server is reported in the /proc/mounts file.

       wsize=n        The maximum number of bytes per network WRITE request
                      that the NFS client can send when writing data to a
                      file on an NFS server.  The actual data payload size of
                      each NFS WRITE request is equal to or smaller than the
                      wsize setting.  The largest write payload supported by
                      the Linux NFS client is 1,048,576 bytes (one megabyte).

                      Similar to rsize, the wsize value is a positive
                      integral multiple of 1024.  Specified wsize values
                      lower than 1024 are replaced with 4096; values larger
                      than 1048576 are replaced with 1048576.  If a specified
                      value is within the supported range but not a multiple
                      of 1024, it is rounded down to the nearest multiple of
                      1024.

                      If a wsize value is not specified, or if the specified
                      wsize value is larger than the maximum that either
                      client or server can support, the client and server
                      negotiate the largest wsize value that they can both
                      support.

                      The wsize mount option as specified on the mount(8)
                      command line appears in the /etc/mtab file.  However,
                      the effective wsize value negotiated by the client and
                      server is reported in the /proc/mounts file.
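
                      For example, a sketch that requests one-megabyte
                      transfers; the effective values are still negotiated
                      with the server:

                             server:/export  /mnt  nfs  rsize=1048576,wsize=1048576  0 0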

       ac / noac      Selects whether the client may cache file attributes.
                      If neither option is specified (or if ac is specified),
                      the client caches file attributes.

                      To improve performance, NFS clients cache file
                      attributes.  Every few seconds, an NFS client checks
                      the server's version of each file's attributes for
                      updates.  Changes that occur on the server in those
                      small intervals remain undetected until the client
                      checks the server again.  The noac option prevents
                      clients from caching file attributes so that
                      applications can more quickly detect file changes on
                      the server.

                      In addition to preventing the client from caching file
                      attributes, the noac option forces application writes
                      to become synchronous so that local changes to a file
                      become visible on the server immediately.  That way,
                      other clients can quickly detect recent writes when
                      they check the file's attributes.

                      Using the noac option provides greater cache coherence
                      among NFS clients accessing the same files, but it
                      extracts a significant performance penalty.  As such,
                      judicious use of file locking is encouraged instead.
                      The DATA AND METADATA COHERENCE section contains a
                      detailed discussion of these trade-offs.

       acregmin=n     The minimum time (in seconds) that the NFS client
                      caches attributes of a regular file before it requests
                      fresh attribute information from a server.  If this
                      option is not specified, the NFS client uses a 3-second
                      minimum.  See the DATA AND METADATA COHERENCE section
                      for a full discussion of attribute caching.

       acregmax=n     The maximum time (in seconds) that the NFS client
                      caches attributes of a regular file before it requests
                      fresh attribute information from a server.  If this
                      option is not specified, the NFS client uses a
                      60-second maximum.  See the DATA AND METADATA COHERENCE
                      section for a full discussion of attribute caching.

       acdirmin=n     The minimum time (in seconds) that the NFS client
                      caches attributes of a directory before it requests
                      fresh attribute information from a server.  If this
                      option is not specified, the NFS client uses a
                      30-second minimum.  See the DATA AND METADATA COHERENCE
                      section for a full discussion of attribute caching.

       acdirmax=n     The maximum time (in seconds) that the NFS client
                      caches attributes of a directory before it requests
                      fresh attribute information from a server.  If this
                      option is not specified, the NFS client uses a
                      60-second maximum.  See the DATA AND METADATA COHERENCE
                      section for a full discussion of attribute caching.

       actimeo=n      Using actimeo sets all of acregmin, acregmax, acdirmin,
                      and acdirmax to the same value.  If this option is not
                      specified, the NFS client uses the defaults for each of
                      these options listed above.
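
                      For example, a sketch that caches attributes for ten
                      minutes on an export that rarely changes:

                             server:/export  /mnt  nfs  ro,actimeo=600  0 0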

       bg / fg        Determines how the mount(8) command behaves if an
                      attempt to mount an export fails.  The fg option causes
                      mount(8) to exit with an error status if any part of
                      the mount request times out or fails outright.  This is
                      called a "foreground" mount, and is the default
                      behavior if neither the fg nor bg mount option is
                      specified.

                      If the bg option is specified, a timeout or failure
                      causes the mount(8) command to fork a child which
                      continues to attempt to mount the export.  The parent
                      immediately returns with a zero exit code.  This is
                      known as a "background" mount.

                      If the local mount point directory is missing, the
                      mount(8) command acts as if the mount request timed
                      out.  This permits nested NFS mounts specified in
                      /etc/fstab to proceed in any order during system
                      initialization, even if some NFS servers are not yet
                      available.  Alternatively these issues can be addressed
                      using an automounter (refer to automount(8) for
                      details).
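
                      For example, a sketch that backgrounds the mount at
                      boot so an unreachable server does not stall system
                      startup (the retry value is illustrative):

                             server:/export  /mnt  nfs  bg,retry=30  0 0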

       rdirplus / nordirplus
                      Selects whether to use NFS v3 or v4 READDIRPLUS
                      requests.  If this option is not specified, the NFS
                      client uses READDIRPLUS requests on NFS v3 or v4 mounts
                      to read small directories.  Some applications perform
                      better if the client uses only READDIR requests for all
                      directories.

       retry=n        The number of minutes that the mount(8) command retries
                      an NFS mount operation in the foreground or background
                      before giving up.  If this option is not specified, the
                      default value for foreground mounts is 2 minutes, and
                      the default value for background mounts is 10000
                      minutes (80 minutes shy of one week).  If a value of
                      zero is specified, the mount(8) command exits
                      immediately after the first failure.

                      Note that this only affects how many retries are made
                      and doesn't affect the delay caused by each retry.  For
                      UDP each retry takes the time determined by the timeo
                      and retrans options, which by default will be about 7
                      seconds.  For TCP the default is 3 minutes, but system
                      TCP connection timeouts will sometimes limit the
                      timeout of each retransmission to around 2 minutes.

       sec=flavors    A colon-separated list of one or more security flavors
                      to use for accessing files on the mounted export.  If
                      the server does not support any of these flavors, the
                      mount operation fails.  If sec= is not specified, the
                      client attempts to find a security flavor that both the
                      client and the server support.  Valid flavors are none,
                      sys, krb5, krb5i, and krb5p.  Refer to the SECURITY
                      CONSIDERATIONS section for details.
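
                      For example, a sketch that requests Kerberos integrity
                      checking but accepts plain Kerberos authentication as a
                      fallback:

                             server:/export  /mnt  nfs  sec=krb5i:krb5  0 0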

       sharecache / nosharecache
                      Determines how the client's data cache and attribute
                      cache are shared when mounting the same export more
                      than once concurrently.  Using the same cache reduces
                      memory requirements on the client and presents
                      identical file contents to applications when the same
                      remote file is accessed via different mount points.

                      If neither option is specified, or if the sharecache
                      option is specified, then a single cache is used for
                      all mount points that access the same export.  If the
                      nosharecache option is specified, then that mount point
                      gets a unique cache.  Note that when data and attribute
                      caches are shared, the mount options from the first
                      mount point take effect for subsequent concurrent
                      mounts of the same export.

                      As of kernel 2.6.18, the behavior specified by
                      nosharecache is legacy caching behavior.  This is
                      considered a data risk since multiple cached copies of
                      the same file on the same client can become out of sync
                      following a local update of one of the copies.

       resvport / noresvport
                      Specifies whether the NFS client should use a
                      privileged source port when communicating with an NFS
                      server for this mount point.  If this option is not
                      specified, or the resvport option is specified, the NFS
                      client uses a privileged source port.  If the
                      noresvport option is specified, the NFS client uses a
                      non-privileged source port.  This option is supported
                      in kernels 2.6.28 and later.

                      Using non-privileged source ports helps increase the
                      maximum number of NFS mount points allowed on a client,
                      but NFS servers must be configured to allow clients to
                      connect via non-privileged source ports.

                      Refer to the SECURITY CONSIDERATIONS section for
                      important details.

       lookupcache=mode
                      Specifies how the kernel manages its cache of directory
                      entries for a given mount point.  mode can be one of
                      all, none, pos, or positive.  This option is supported
                      in kernels 2.6.28 and later.

                      The Linux NFS client caches the result of all NFS
                      LOOKUP requests.  If the requested directory entry
                      exists on the server, the result is referred to as
                      positive.  If the requested directory entry does not
                      exist on the server, the result is referred to as
                      negative.

                      If this option is not specified, or if all is
                      specified, the client assumes both types of directory
                      cache entries are valid until their parent directory's
                      cached attributes expire.

                      If pos or positive is specified, the client assumes
                      positive entries are valid until their parent
                      directory's cached attributes expire, but always
                      revalidates negative entries before an application can
                      use them.

                      If none is specified, the client revalidates both types
                      of directory cache entries before an application can
                      use them.  This permits quick detection of files that
                      were created or removed by other clients, but can
                      impact application and server performance.

                      The DATA AND METADATA COHERENCE section contains a
                      detailed discussion of these trade-offs.
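
                      For example, a sketch that caches only positive lookup
                      results, so files newly created by other clients are
                      noticed promptly:

                             server:/export  /mnt  nfs  lookupcache=pos  0 0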

       fsc / nofsc    Enables/disables the caching of (read-only) data pages
                      on the local disk using the FS-Cache facility.  See
                      cachefilesd(8) and <kernel_source>/Documentation/
                      filesystems/caching for details on how to configure the
                      FS-Cache facility.  The default value is nofsc.

   Options for NFS versions 2 and 3 only
       Use these options, along with the options in the above subsection, for
       NFS versions 2 and 3 only.

       proto=netid    The netid determines the transport that is used to
                      communicate with the NFS server.  Available options are
                      udp, udp6, tcp, tcp6, and rdma.  Those which end in 6
                      use IPv6 addresses and are only available if support
                      for TI-RPC is built in.  Others use IPv4 addresses.

                      Each transport protocol uses different default retrans
                      and timeo settings.  Refer to the description of these
                      two mount options for details.

                      In addition to controlling how the NFS client transmits
                      requests to the server, this mount option also controls
                      how the mount(8) command communicates with the server's
                      rpcbind and mountd services.  Specifying a netid that
                      uses TCP forces all traffic from the mount(8) command
                      and the NFS client to use TCP.  Specifying a netid that
                      uses UDP forces all traffic types to use UDP.

                      Before using NFS over UDP, refer to the TRANSPORT
                      METHODS section.

                      If the proto mount option is not specified, the
                      mount(8) command discovers which protocols the server
                      supports and chooses an appropriate transport for each
                      service.  Refer to the TRANSPORT METHODS section for
                      more details.

       udp            The udp option is an alternative to specifying
                      proto=udp.  It is included for compatibility with other
                      operating systems.

                      Before using NFS over UDP, refer to the TRANSPORT
                      METHODS section.

       tcp            The tcp option is an alternative to specifying
                      proto=tcp.  It is included for compatibility with other
                      operating systems.

       rdma           The rdma option is an alternative to specifying
                      proto=rdma.

       port=n         The numeric value of the server's NFS service port.  If
                      the server's NFS service is not available on the
                      specified port, the mount request fails.

                      If this option is not specified, or if the specified
                      port value is 0, then the NFS client uses the NFS
                      service port number advertised by the server's rpcbind
                      service.  The mount request fails if the server's
                      rpcbind service is not available, the server's NFS
                      service is not registered with its rpcbind service, or
                      the server's NFS service is not available on the
                      advertised port.

       mountport=n    The numeric value of the server's mountd port.  If the
                      server's mountd service is not available on the
                      specified port, the mount request fails.

                      If this option is not specified, or if the specified
                      port value is 0, then the mount(8) command uses the
                      mountd service port number advertised by the server's
                      rpcbind service.  The mount request fails if the
                      server's rpcbind service is not available, the server's
                      mountd service is not registered with its rpcbind
                      service, or the server's mountd service is not
                      available on the advertised port.

                      This option can be used when mounting an NFS server
                      through a firewall that blocks the rpcbind protocol.
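
                      For example, a sketch of an NFS version 3 mount through
                      a firewall, with the server's NFS and mountd services
                      pinned to fixed ports (the port numbers are
                      placeholders):

                             server:/export  /mnt  nfs  nfsvers=3,port=2049,mountport=20048  0 0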

       mountproto=netid
                      The transport the NFS client uses to transmit requests
                      to the NFS server's mountd service when performing this
                      mount request, and when later unmounting this mount
                      point.

                      netid may be one of udp or tcp, which use IPv4
                      addresses, or, if TI-RPC is built into the mount.nfs
                      command, udp6 or tcp6, which use IPv6 addresses.

                      This option can be used when mounting an NFS server
                      through a firewall that blocks a particular transport.
                      When used in combination with the proto option,
                      different transports for mountd requests and NFS
                      requests can be specified.  If the server's mountd
                      service is not available via the specified transport,
                      the mount request fails.

                      Refer to the TRANSPORT METHODS section for more on how
                      the mountproto mount option interacts with the proto
                      mount option.

       mounthost=name The hostname of the host running mountd.  If this
                      option is not specified, the mount(8) command assumes
                      that the mountd service runs on the same host as the
                      NFS service.

       mountvers=n    The RPC version number used to contact the server's
                      mountd.  If this option is not specified, the client
                      uses a version number appropriate to the requested NFS
                      version.  This option is useful when multiple NFS
                      services are running on the same remote server host.

       namlen=n       The maximum length of a pathname component on this
                      mount.  If this option is not specified, the maximum
                      length is negotiated with the server.  In most cases,
                      this maximum length is 255 characters.

                      Some early versions of NFS did not support this
                      negotiation.  Using this option ensures that
                      pathconf(3) reports the proper maximum component length
                      to applications in such cases.

       lock / nolock  Selects whether to use the NLM sideband protocol to
                      lock files on the server.  If neither option is
                      specified (or if lock is specified), NLM locking is
                      used for this mount point.  When using the nolock
                      option, applications can lock files, but such locks
                      provide exclusion only against other applications
                      running on the same client.  Remote applications are
                      not affected by these locks.

                      NLM locking must be disabled with the nolock option
                      when using NFS to mount /var because /var contains
                      files used by the NLM implementation on Linux.  Using
                      the nolock option is also required when mounting
                      exports on NFS servers that do not support the NLM
                      protocol.

       cto / nocto    Selects whether to use close-to-open cache coherence
                      semantics.  If neither option is specified (or if cto
                      is specified), the client uses close-to-open cache
                      coherence semantics.  If the nocto option is specified,
                      the client uses a non-standard heuristic to determine
                      when files on the server have changed.

                      Using the nocto option may improve performance for
                      read-only mounts, but should be used only if the data
                      on the server changes only occasionally.  The DATA AND
                      METADATA COHERENCE section discusses the behavior of
                      this option in more detail.

       acl / noacl    Selects whether to use the NFSACL sideband protocol on
                      this mount point.  The NFSACL sideband protocol is a
                      proprietary protocol implemented in Solaris that
                      manages Access Control Lists.  NFSACL was never made a
                      standard part of the NFS protocol specification.

                      If neither acl nor noacl option is specified, the NFS
                      client negotiates with the server to see if the NFSACL
                      protocol is supported, and uses it if the server
                      supports it.  Disabling the NFSACL sideband protocol
                      may be necessary if the negotiation causes problems on
                      the client or server.  Refer to the SECURITY
                      CONSIDERATIONS section for more details.

       local_lock=mechanism
                      Specifies whether to use local locking for any or both
                      of the flock and the POSIX locking mechanisms.
                      mechanism can be one of all, flock, posix, or none.
                      This option is supported in kernels 2.6.37 and later.

                      The Linux NFS client provides a way to make locks
                      local.  This means that applications can lock files,
                      but such locks provide exclusion only against other
                      applications running on the same client.  Remote
                      applications are not affected by these locks.

                      If this option is not specified, or if none is
                      specified, the client assumes that the locks are not
                      local.

                      If all is specified, the client assumes that both flock
                      and POSIX locks are local.

                      If flock is specified, the client assumes that only
                      flock locks are local and uses the NLM sideband
                      protocol to lock files when POSIX locks are used.

                      If posix is specified, the client assumes that POSIX
                      locks are local and uses the NLM sideband protocol to
                      lock files when flock locks are used.

                      To support legacy flock behavior similar to that of NFS
                      clients < 2.6.12, use 'local_lock=flock'.  This option
                      is required when exporting NFS mounts via Samba, as
                      Samba maps Windows share mode locks as flock.  Since
                      NFS clients > 2.6.12 implement flock by emulating POSIX
                      locks, this would otherwise result in conflicting
                      locks.

                      NOTE: When used together, the 'local_lock' mount option
                      is overridden by the 'nolock'/'lock' mount options.
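
                      For example, a sketch of a mount whose files are to be
                      re-exported via Samba:

                             server:/export  /mnt  nfs  local_lock=flock  0 0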

   Options for NFS version 4 only
       Use these options, along with the options in the first subsection
       above, for NFS version 4.0 and newer.

       proto=netid    The netid determines the transport that is used to
                      communicate with the NFS server.  Supported options are
                      tcp, tcp6, and rdma.  tcp6 uses IPv6 addresses and is
                      only available if support for TI-RPC is built in.  The
                      other two use IPv4 addresses.

                      All NFS version 4 servers are required to support TCP,
                      so if this mount option is not specified, the NFS
                      version 4 client uses the TCP protocol.  Refer to the
                      TRANSPORT METHODS section for more details.

       minorversion=n Specifies the protocol minor version number.  NFSv4
                      introduces "minor versioning," where NFS protocol
                      enhancements can be introduced without bumping the NFS
                      protocol version number.  Before kernel 2.6.38, the
                      minor version is always zero, and this option is not
                      recognized.  After this kernel, specifying
                      "minorversion=1" enables a number of advanced features,
                      such as NFSv4 sessions.

                      Recent kernels allow the minor version to be specified
                      using the vers= option.  For example, specifying
                      vers=4.1 is the same as specifying
                      vers=4,minorversion=1.
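
                      For example, the following two /etc/fstab lines request
                      the same thing:

                             server:/export  /mnt  nfs  vers=4.1               0 0
                             server:/export  /mnt  nfs  vers=4,minorversion=1  0 0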

       port=n         The numeric value of the server's NFS service port.  If
                      the server's NFS service is not available on the
                      specified port, the mount request fails.

                      If this mount option is not specified, the NFS client
                      uses the standard NFS port number of 2049 without first
                      checking the server's rpcbind service.  This allows an
                      NFS version 4 client to contact an NFS version 4 server
                      through a firewall that may block rpcbind requests.

                      If the specified port value is 0, then the NFS client
                      uses the NFS service port number advertised by the
                      server's rpcbind service.  The mount request fails if
                      the server's rpcbind service is not available, the
                      server's NFS service is not registered with its rpcbind
                      service, or the server's NFS service is not available
                      on the advertised port.

       cto / nocto    Selects whether to use close-to-open cache coherence
                      semantics for NFS directories on this mount point.  If
                      neither cto nor nocto is specified, the default is to
                      use close-to-open cache coherence semantics for
                      directories.

                      File data caching behavior is not affected by this
                      option.  The DATA AND METADATA COHERENCE section
                      discusses the behavior of this option in more detail.

       clientaddr=n.n.n.n

       clientaddr=n:n:...:n
                      Specifies a single IPv4 address (in dotted-quad form),
                      or a non-link-local IPv6 address, that the NFS client
                      advertises to allow servers to perform NFS version 4.0
                      callback requests against files on this mount point.
                      If the server is unable to establish callback
                      connections to clients, performance may degrade, or
                      accesses to files may temporarily hang.  You can
                      specify a value of IPv4_ANY (0.0.0.0) or the equivalent
                      IPv6 any address, which signals to the NFS server that
                      this NFS client does not want delegations.

                      If this option is not specified, the mount(8) command
                      attempts to discover an appropriate callback address
                      automatically.  The automatic discovery process is not
                      perfect, however.  In the presence of multiple client
                      network interfaces, special routing policies, or
                      atypical network topologies, the exact address to use
                      for callbacks may be nontrivial to determine.

                      NFS protocol versions 4.1 and 4.2 use the
                      client-established TCP connection for callback
                      requests, so they do not require the server to connect
                      to the client.  This option therefore affects only NFS
                      version 4.0 mounts.
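
                      For example, a sketch of an NFS version 4.0 mount that
                      advertises an explicit callback address (192.0.2.10 is
                      a placeholder from the documentation address range):

                             server:/export  /mnt  nfs  nfsvers=4.0,clientaddr=192.0.2.10  0 0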

       migration / nomigration
                      Selects whether the client uses an identification
                      string that is compatible with NFSv4 Transparent State
                      Migration (TSM).  If the mounted server supports NFSv4
                      migration with TSM, specify the migration option.

                      Some server features misbehave in the face of a
                      migration-compatible identification string.  The
                      nomigration option retains the use of a traditional
                      client identification string which is compatible with
                      legacy NFS servers.  This is also the behavior if
                      neither option is specified.  A client's open and lock
                      state cannot be migrated transparently when it
                      identifies itself via a traditional identification
                      string.

                      This mount option has no effect with NFSv4 minor
                      versions newer than zero, which always use
                      TSM-compatible client identification strings.

nfs4 FILE SYSTEM TYPE
       The nfs4 file system type is an old syntax for specifying NFSv4 usage.
       It can still be used with all NFSv4-specific and common options,
       except the nfsvers mount option.

MOUNT CONFIGURATION FILE
       If the mount command is configured to do so, all of the mount options
       described in the previous section can also be configured in the
       /etc/nfsmount.conf file.  See nfsmount.conf(5) for details.

EXAMPLES
       To mount an export using NFS version 2, use the nfs file system type
       and specify the nfsvers=2 mount option.  To mount using NFS version 3,
       use the nfs file system type and specify the nfsvers=3 mount option.
       To mount using NFS version 4, use either the nfs file system type,
       with the nfsvers=4 mount option, or the nfs4 file system type.

       The following example from an /etc/fstab file causes the mount command
       to negotiate reasonable defaults for NFS behavior.

              server:/export  /mnt  nfs   defaults                      0 0

       Here is an example from an /etc/fstab file for an NFS version 2 mount
       over UDP.

              server:/export  /mnt  nfs   nfsvers=2,proto=udp           0 0

       This example shows how to mount using NFS version 4 over TCP with
       Kerberos 5 mutual authentication.

              server:/export  /mnt  nfs4  sec=krb5                      0 0

       This example shows how to mount using NFS version 4 over TCP with
       Kerberos 5 privacy or data integrity mode.

              server:/export  /mnt  nfs4  sec=krb5p:krb5i               0 0

       This example can be used to mount /usr over NFS.

              server:/export  /usr  nfs   ro,nolock,nocto,actimeo=3600  0 0

       This example shows how to mount an NFS server using a raw IPv6
       link-local address.

              [fe80::215:c5ff:fb3e:e2b1%eth0]:/export /mnt nfs defaults 0 0

TRANSPORT METHODS
       NFS clients send requests to NFS servers via Remote Procedure Calls,
       or RPCs.  The RPC client discovers remote service endpoints
       automatically, handles per-request authentication, adjusts request
       parameters for different byte endianness on client and server, and
       retransmits requests that may have been lost by the network or server.
       RPC requests and replies flow over a network transport.

       In most cases, the mount(8) command, NFS client, and NFS server can
       automatically negotiate proper transport and data transfer size
       settings for a mount point.  In some cases, however, it pays to
       specify these settings explicitly using mount options.

       Traditionally, NFS clients used the UDP transport exclusively for
       transmitting requests to servers.  Though its implementation is
       simple, NFS over UDP has many limitations that prevent smooth
       operation and good performance in some common deployment environments.
       Even an insignificant packet loss rate results in the loss of whole
       NFS requests; as such, retransmit timeouts are usually in the
       subsecond range to allow clients to recover quickly from dropped
       requests, but this can result in extraneous network traffic and server
       load.

       However, UDP can be quite effective in specialized settings where the
       network's MTU is large relative to NFS's data transfer size (such as
       network environments that enable jumbo Ethernet frames).  In such
       environments, trimming the rsize and wsize settings so that each NFS
       read or write request fits in just a few network frames (or even in a
       single frame) is advised.  This reduces the probability that the loss
       of a single MTU-sized network frame results in the loss of an entire
       large read or write request.
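
       For example, a sketch of an /etc/fstab line for NFS over UDP on a
       jumbo-frame network, trimming each transfer to fit within a single
       9000-byte frame (names are placeholders):

              server:/export  /mnt  nfs  proto=udp,rsize=8192,wsize=8192  0 0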

       TCP is the default transport protocol used for all modern NFS
       implementations.  It performs well in almost every conceivable network
       environment and provides excellent guarantees against data corruption
       caused by network unreliability.  TCP is often a requirement for
       mounting a server through a network firewall.

       Under normal circumstances, networks drop packets much more frequently
       than NFS servers drop requests.  As such, an aggressive retransmit
       timeout setting for NFS over TCP is unnecessary.  Typical timeout
       settings for NFS over TCP are between one and ten minutes.  After the
       client exhausts its retransmits (the value of the retrans mount
       option), it assumes a network partition has occurred, and attempts to
       reconnect to the server on a fresh socket.  Since TCP itself makes
       network data transfer reliable, rsize and wsize can safely be allowed
       to default to the largest values supported by both client and server,
       independent of the network's MTU size.

   Using the mountproto mount option
       This section applies only to NFS version 2 and version 3 mounts since
       NFS version 4 does not use a separate protocol for mount requests.

       The Linux NFS client can use a different transport for contacting an
       NFS server's rpcbind service, its mountd service, its Network Lock
       Manager (NLM) service, and its NFS service.  The exact transports
       employed by the Linux NFS client for each mount point depends on the
       settings of the transport mount options, which include proto,
       mountproto, udp, and tcp.

       The client sends Network Status Manager (NSM) notifications via UDP
       no matter what transport options are specified, but listens for server
       NSM notifications on both UDP and TCP.  The NFS Access Control List
       (NFSACL) protocol shares the same transport as the main NFS service.

       If no transport options are specified, the Linux NFS client uses UDP
       to contact the server's mountd service, and TCP to contact its NLM
       and NFS services by default.

       If the server does not support these transports for these services,
       the mount(8) command attempts to discover what the server supports,
       and then retries the mount request once using the discovered
       transports.  If the server does not advertise any transport supported
       by the client or is misconfigured, the mount request fails.  If the bg
       option is in effect, the mount command backgrounds itself and
       continues to attempt the specified mount request.

       When the proto option, the udp option, or the tcp option is specified
       but the mountproto option is not, the specified transport is used to
       contact the server's mountd service, and is also used for the NLM and
       NFS services.

       If the mountproto option is specified but none of the proto, udp or
       tcp options are specified, then the specified transport is used for
       the initial mountd request, but the mount command attempts to discover
       what the server supports for the NFS protocol, preferring TCP if both
       transports are supported.

       If both the mountproto and proto (or udp or tcp) options are
       specified, then the transport specified by the mountproto option is
       used for the initial mountd request, and the transport specified by
       the proto option (or the udp or tcp options) is used for NFS, no
       matter what order these options appear.  No automatic service
       discovery is performed if these options are specified.

       If any of the proto, udp, tcp, or mountproto options are specified
       more than once on the same mount command line, then the value of the
       rightmost instance of each of these options takes effect.

   Using NFS over UDP on high-speed links
       Using NFS over UDP on high-speed links such as Gigabit Ethernet can
       cause silent data corruption.

       The problem can be triggered at high loads, and is caused by problems
       in IP fragment reassembly.  NFS reads and writes typically transmit
       UDP packets of 4 kilobytes or more, which have to be broken up into
       several fragments in order to be sent over the Ethernet link, which
       limits packets to 1500 bytes by default.  This process happens at the
       IP network layer and is called fragmentation.

       In order to identify fragments that belong together, IP assigns a
       16-bit IP ID value to each packet; fragments generated from the same
       UDP packet will have the same IP ID.  The receiving system will
       collect these fragments and combine them to form the original UDP
       packet.  This process is called reassembly.  The default timeout for
       packet reassembly is 30 seconds; if the network stack does not receive
       all fragments of a given packet within this interval, it assumes the
       missing fragment(s) got lost and discards those it already received.

       The problem this creates over high-speed links is that it is possible
       to send more than 65536 packets within 30 seconds.  In fact, with
       heavy NFS traffic one can observe that the IP IDs repeat after about
       5 seconds.

       This has serious effects on reassembly: if one fragment gets lost,
       another fragment from a different packet but with the same IP ID will
       arrive within the 30-second timeout, and the network stack will
       combine these fragments to form a new packet.  Most of the time,
       network layers above IP will detect this mismatched reassembly - in
       the case of UDP, the UDP checksum, which is a 16-bit checksum over the
       entire packet payload, will usually not match, and UDP will discard
       the bad packet.

       However, the UDP checksum is only 16 bits, so there is a chance of 1
       in 65536 that it will match even if the packet payload is completely
       random (which very often isn't the case).  If that is the case, silent
       data corruption will occur.

       This potential should be taken seriously, at least on Gigabit
       Ethernet.  Network speeds of 100Mbit/s should be considered less
       problematic, because with most traffic patterns IP ID wrap around will
       take much longer than 30 seconds.

       It is therefore strongly recommended to use NFS over TCP where
       possible, since TCP does not perform fragmentation.

       If you absolutely have to use NFS over UDP over Gigabit Ethernet, some
       steps can be taken to mitigate the problem and reduce the probability
       of corruption:

       Jumbo frames:  Many Gigabit network cards are capable of transmitting
                      frames bigger than the 1500 byte limit of traditional
                      Ethernet, typically 9000 bytes.  Using jumbo frames of
                      9000 bytes will allow you to run NFS over UDP at a page
                      size of 8K without fragmentation.  Of course, this is
                      only feasible if all involved stations support jumbo
                      frames.

                      To enable a machine to send jumbo frames on cards that
                      support it, it is sufficient to configure the interface
                      for an MTU value of 9000.
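
                      For example, a sketch using the ip(8) command (the
                      interface name is a placeholder):

                             ip link set eth0 mtu 9000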

       Lower reassembly timeout:
                      By lowering this timeout below the time it takes the IP
                      ID counter to wrap around, incorrect reassembly of
                      fragments can be prevented as well.  To do so, simply
                      write the new timeout value (in seconds) to the file
                      /proc/sys/net/ipv4/ipfrag_time.

                      A value of 2 seconds will greatly reduce the
                      probability of IP ID clashes on a single Gigabit link,
                      while still allowing for a reasonable timeout when
                      receiving fragmented traffic from distant peers.
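
                      For example, a sketch of lowering the timeout to 2
                      seconds as root; note that this setting does not
                      persist across reboots:

                             echo 2 > /proc/sys/net/ipv4/ipfrag_time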

DATA AND METADATA COHERENCE
       Some modern cluster file systems provide perfect cache coherence among
       their clients.  Perfect cache coherence among disparate NFS clients is
       expensive to achieve, especially on wide area networks.  As such, NFS
       settles for weaker cache coherence that satisfies the requirements of
       most file sharing types.

   Close-to-open cache consistency
       Typically file sharing is completely sequential.  First client A opens
       a file, writes something to it, then closes it.  Then client B opens
       the same file, and reads the changes.

       When an application opens a file stored on an NFS version 3 server,
       the NFS client checks that the file exists on the server and is
       permitted to the opener by sending a GETATTR or ACCESS request.  The
       NFS client sends these requests regardless of the freshness of the
       file's cached attributes.

       When the application closes the file, the NFS client writes back any
       pending changes to the file so that the next opener can view the
       changes.  This also gives the NFS client an opportunity to report
       write errors to the application via the return code from close(2).

       The behavior of checking at open time and flushing at close time is
       referred to as close-to-open cache consistency, or CTO.  It can be
       disabled for an entire mount point using the nocto mount option.

   Weak cache consistency
       There are still opportunities for a client's data cache to contain
       stale data.  The NFS version 3 protocol introduced "weak cache
       consistency" (also known as WCC) which provides a way of efficiently
       checking a file's attributes before and after a single request.  This
       allows a client to help identify changes that could have been made by
       other clients.

       When a client is using many concurrent operations that update the
       same file at the same time (for example, during asynchronous write
       behind), it is still difficult to tell whether it was that client's
       updates or some other client's updates that altered the file.
912   Attribute caching
913       Use  the  noac  mount option to achieve attribute cache coherence among
914       multiple clients.  Almost  every  file  system  operation  checks  file
915       attribute  information.  The client keeps this information cached for a
916       period of time to reduce network and server  load.   When  noac  is  in
917       effect,  a client's file attribute cache is disabled, so each operation
918       that needs to check a file's attributes is forced to  go  back  to  the
919       server.   This  permits a client to see changes to a file very quickly,
920       at the cost of many extra network operations.
921
922       Be careful not to confuse the noac option with "no data caching."   The
923       noac  mount  option prevents the client from caching file metadata, but
924       there are still races that may result in data cache incoherence between
925       client and server.
926
927       The  NFS  protocol  is not designed to support true cluster file system
928       cache coherence without some type  of  application  serialization.   If
929       absolute cache coherence among clients is required, applications should
930       use file locking.  Alternatively, applications can open their  files
931       with the O_DIRECT flag to disable data caching entirely.
932
933   File timestamp maintenance
934       NFS  servers are responsible for managing file and directory timestamps
935       (atime, ctime, and mtime).  When a file is accessed or  updated  on  an
936       NFS  server,  the file's timestamps are updated just like they would be
937       on a filesystem local to an application.
938
939       NFS clients cache file  attributes,  including  timestamps.   A  file's
940       timestamps are updated on NFS clients when its attributes are retrieved
941       from the NFS server.  Thus there may be  some  delay  before  timestamp
942       updates on an NFS server appear to applications on NFS clients.
943
944       To  comply  with  the  POSIX  filesystem standard, the Linux NFS client
945       relies on NFS servers to keep a file's mtime and ctime timestamps prop‐
946       erly  up  to  date.  It does this by flushing local data changes to the
947       server before reporting mtime to applications via system calls such  as
948       stat(2).
949
950       The  Linux  client  handles  atime  updates more loosely, however.  NFS
951       clients maintain good performance by caching data, but that means  that
952       application  reads,  which  normally update atime, are not reflected to
953       the server where a file's atime is actually maintained.
954
955       Because of this caching behavior, the Linux NFS client does not support
956       generic atime-related mount options.  See mount(8) for details on these
957       options.
958
959       In particular, the atime/noatime, diratime/nodiratime, relatime/norela‐
960       time, and strictatime/nostrictatime mount options have no effect on NFS
961       mounts.
962
963       /proc/mounts may report that the relatime mount option is  set  on  NFS
964       mounts,  but  in fact the atime semantics are always as described here,
965       and are not like relatime semantics.
966
967   Directory entry caching
968       The Linux NFS client caches the result of all NFS LOOKUP requests.   If
969       the  requested  directory  entry  exists  on  the server, the result is
970       referred to as a positive lookup result.  If  the  requested  directory
971       entry  does  not  exist  on  the  server  (that is, the server returned
972       ENOENT), the result is referred to as a negative lookup result.
973
974       To detect when directory entries have been  added  or  removed  on  the
975       server,  the  Linux  NFS  client  watches  a directory's mtime.  If the
976       client detects a change in a directory's mtime, the  client  drops  all
977       cached  LOOKUP results for that directory.  Since the directory's mtime
978       is a cached attribute, it may take some time before a client notices it
979       has  changed.  See the descriptions of the acdirmin, acdirmax, and noac
980       mount options for more information about how long a  directory's  mtime
981       is cached.
982
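       For instance, to shorten the window during which a changed directory
       mtime can go unnoticed, the directory attribute timeout bounds can
       be lowered (the values, server name, and paths here are
       illustrative):

               mount -t nfs -o acdirmin=1,acdirmax=5 server:/export /mnt
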
983       Caching directory entries improves the performance of applications that
984       do not share files with applications on other  clients.   Using  cached
985       information  about directories can interfere with applications that run
986       concurrently on multiple clients and need to  detect  the  creation  or
987       removal of files quickly, however.  The lookupcache mount option allows
988       some tuning of directory entry caching behavior.
989
990       Before kernel release 2.6.28, the Linux NFS client tracked  only  posi‐
991       tive  lookup results.  This permitted applications to detect new direc‐
992       tory entries created by other clients  quickly  while  still  providing
993       some of the performance benefits of caching.  If an application depends
994       on the previous lookup caching behavior of the Linux  NFS  client,  you
995       can use lookupcache=positive.
996
997       If  the client ignores its cache and validates every application lookup
998       request with the server, that client can immediately detect when a  new
999       directory  entry  has been either created or removed by another client.
1000       You can specify this behavior using lookupcache=none.   The  extra  NFS
1001       requests  needed  if  the  client  does not cache directory entries can
1002       exact a performance penalty.  Disabling lookup caching should result in
1003       less of a performance penalty than using noac, and has no effect on how
1004       the NFS client caches the attributes of files.
1005
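       As an illustration, the following commands (with placeholder server
       and path names) request the two lookup caching behaviors described
       above:

               mount -t nfs -o lookupcache=positive server:/export /mnt
               mount -t nfs -o lookupcache=none server:/export /mnt
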
1006   The sync mount option
1007       The NFS client treats the sync mount option differently than some other
1008       file  systems  (refer to mount(8) for a description of the generic sync
1009       and async mount options).  If neither sync nor async is  specified  (or
1010       if the async option is specified), the NFS client delays sending appli‐
1011       cation writes to the server until any of these events occur:
1012
1013              Memory pressure forces reclamation of system memory resources.
1014
1015              An  application  flushes  file  data  explicitly  with  sync(2),
1016              msync(2), or fsync(2).
1017
1018              An application closes a file with close(2).
1019
1020              The file is locked/unlocked via fcntl(2).
1021
1022       In other words, under normal circumstances, data written by an applica‐
1023       tion may not immediately appear on the server that hosts the file.
1024
1025       If the sync option is specified on a mount point, any system call  that
1026       writes data to files on that mount point causes that data to be flushed
1027       to the server before the system call returns  control  to  user  space.
1028       This provides greater data cache coherence among clients, but at a sig‐
1029       nificant performance cost.
1030
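       For example, a mount point whose writes must be visible on the
       server as soon as each write system call returns might be mounted
       with a command such as this (the server name and paths are
       placeholders):

               mount -t nfs -o sync server:/export /mnt
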
1031       Applications can use the O_SYNC open flag to force  application  writes
1032       to  individual files to go to the server immediately without the use of
1033       the sync mount option.
1034
1035   Using file locks with NFS
1036       The Network Lock Manager protocol is a separate sideband protocol  used
1037       to  manage  file locks in NFS version 2 and version 3.  To support lock
1038       recovery after a client or server reboot, a second sideband protocol --
1039       known  as  the Network Status Manager protocol -- is also required.  In
1040       NFS version 4, file locking is supported directly in the main NFS  pro‐
1041       tocol, and the NLM and NSM sideband protocols are not used.
1042
1043       In  most  cases, NLM and NSM services are started automatically, and no
1044       extra configuration is required.  Configure all NFS clients with fully-
1045       qualified  domain  names to ensure that NFS servers can find clients to
1046       notify them of server reboots.
1047
1048       NLM supports advisory file locks only.  To lock NFS files, use fcntl(2)
1049       with  the  F_GETLK  and F_SETLK commands.  The NFS client converts file
1050       locks obtained via flock(2) to advisory locks.
1051
1052       When mounting servers that do not support the  NLM  protocol,  or  when
1053       mounting  an  NFS server through a firewall that blocks the NLM service
1054       port, specify the nolock mount option. NLM  locking  must  be  disabled
1055       with  the  nolock option when using NFS to mount /var because /var con‐
1056       tains files used by the NLM implementation on Linux.
1057
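       For example, an illustrative /etc/fstab entry (the server name is a
       placeholder) that mounts /var with NLM locking disabled might read:

               server:/var   /var   nfs   nolock   0 0
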
1058       Specifying the nolock option can also improve the performance  of  a
1059       proprietary application which runs on a single client and  uses  file
1060       locks extensively.
1061
1062   NFS version 4 caching features
1063       The data and metadata caching behavior of NFS version 4 clients is sim‐
1064       ilar to that of earlier versions.  However, NFS version 4 adds two fea‐
1065       tures that improve cache behavior: change attributes and  file  delega‐
1066       tion.
1067
1068       The  change  attribute is a new part of NFS file and directory metadata
1069       which tracks data changes.  It replaces the use of a  file's  modifica‐
1070       tion  and  change time stamps as a way for clients to validate the con‐
1071       tent of their caches.  Change attributes are independent  of  the  time
1072       stamp resolution on either the server or client, however.
1073
1074       A  file  delegation  is  a contract between an NFS version 4 client and
1075       server that allows the client to treat a  file  temporarily  as  if  no
1076       other client is accessing it.  The server promises to notify the client
1077       (via a callback request) if another  client  attempts  to  access  that
1078       file.  Once a file has been delegated to a client, the client can cache
1079       that file's data  and  metadata  aggressively  without  contacting  the
1080       server.
1081
1082       File  delegations  come in two flavors: read and write.  A read delega‐
1083       tion means that the server notifies the client about any other  clients
1084       that  want  to  write  to  the file.  A write delegation means that the
1085       client gets notified about either read or write accessors.
1086
1087       Servers grant file delegations when a file is opened,  and  can  recall
1088       delegations  at  any  time when another client wants access to the file
1089       that conflicts with any delegations already  granted.   Delegations  on
1090       directories are not supported.
1091
1092       In  order to support delegation callback, the server checks the network
1093       return path to the client during the client's initial contact with  the
1094       server.   If  contact with the client cannot be established, the server
1095       simply does not grant any delegations to that client.
1096

SECURITY CONSIDERATIONS

1098       NFS servers control access to file data, but they depend on  their  RPC
1099       implementation  to provide authentication of NFS requests.  Traditional
1100       NFS access control mimics the standard mode bit access control provided
1101       in local file systems.  Traditional RPC authentication uses a number to
1102       represent each user (usually the user's own uid), a number to represent
1103       the  user's  group  (the  user's  gid), and a set of up to 16 auxiliary
1104       group numbers to represent other groups of which the user may be a mem‐
1105       ber.
1106
1107       Typically,  file  data  and user ID values appear unencrypted (i.e. "in
1108       the clear") on the network.  Moreover, NFS versions 2 and 3  use  sepa‐
1109       rate  sideband protocols for mounting, locking and unlocking files, and
1110       reporting system status of clients and servers.  These auxiliary proto‐
1111       cols use no authentication.
1112
1113       In  addition  to  combining  these sideband protocols with the main NFS
1114       protocol, NFS version 4 introduces more advanced forms of  access  con‐
1115       trol,  authentication, and in-transit data protection.  The NFS version
1116       4 specification mandates support for strong authentication and security
1117       flavors   that  provide  per-RPC  integrity  checking  and  encryption.
1118       Because NFS version 4 combines the function of the  sideband  protocols
1119       into  the main NFS protocol, the new security features apply to all NFS
1120       version 4 operations including  mounting,  file  locking,  and  so  on.
1121       RPCSEC GSS authentication can also be used with NFS versions 2 and 3, but
1122       it does not protect their sideband protocols.
1123
1124       The sec mount option specifies the security flavor used for  operations
1125       on  behalf  of users on that NFS mount point.  Specifying sec=krb5 pro‐
1126       vides cryptographic proof of a user's identity  in  each  RPC  request.
1127       This  provides  strong  verification of the identity of users accessing
1128       data on the server.  Note that additional configuration besides  adding
1129       this  mount  option  is  required in order to enable Kerberos security.
1130       Refer to the rpc.gssd(8) man page for details.
1131
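       For example, once Kerberos has been configured on both the client
       and the server, a Kerberos-protected mount might be requested with a
       command such as the following (the server name and paths are
       placeholders):

               mount -t nfs -o sec=krb5 server:/export /mnt
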
1132       Two additional flavors of Kerberos security are  supported:  krb5i  and
1133       krb5p.   The  krb5i security flavor provides a cryptographically strong
1134       guarantee that the data in each RPC request has not been tampered with.
1135       The  krb5p  security  flavor encrypts every RPC request to prevent data
1136       exposure during  network  transit;  however,  expect  some  performance
1137       impact  when  using  integrity checking or encryption.  Similar support
1138       for other forms of cryptographic security is also available.
1139
1140   NFS version 4 filesystem crossing
1141       The NFS version 4 protocol allows a client to renegotiate the  security
1142       flavor  when  the  client  crosses into a new filesystem on the server.
1143       The newly negotiated flavor affects only accesses of the  new  filesys‐
1144       tem.
1145
1146       Such negotiation typically occurs when a client crosses from a server's
1147       pseudo-fs into one of the server's exported physical filesystems, which
1148       often have more restrictive security settings than the pseudo-fs.
1149
1150   NFS version 4 Leases
1151       In  NFS  version  4,  a lease is a period of time during which a server
1152       irrevocably grants a file lock to a client.  If the lease expires,  the
1153       server  is  allowed  to  revoke  that lock.  Clients periodically renew
1154       their leases to prevent lock revocation.
1155
1156       After an NFS version 4 server reboots, each  client  tells  the  server
1157       about all file open and lock state under its lease before operation can
1158       continue.  If the client reboots, the server frees all  open  and  lock
1159       state associated with that client's lease.
1160
1161       As  part  of  establishing  a  lease, therefore, a client must identify
1162       itself to a server.  A fixed string is used to distinguish that  client
1163       from  others,  and  a  changeable verifier is used to indicate when the
1164       client has rebooted.
1165
1166       A client uses a particular security flavor and principal when  perform‐
1167       ing  the  operations  to  establish  a lease.  If two clients happen to
1168       present the same identity string, a server can use their principals  to
1169       detect  that  they  are  different clients, and prevent one client from
1170       interfering with the other's lease.
1171
1172       The Linux NFS client establishes one lease for each server.  Lease man‐
1173       agement  operations, such as lease renewal, are not done on behalf of a
1174       particular file, lock, user, or mount point, but on behalf of the whole
1175       client  that owns that lease.  These operations must use the same secu‐
1176       rity flavor and principal that was used when the lease was established,
1177       even across client reboots.
1178
1179       When  Kerberos  is  configured  on a Linux NFS client (i.e., there is a
1180       /etc/krb5.keytab on that client), the client attempts to use a Kerberos
1181       security  flavor  for  its  lease management operations.  This provides
1182       strong authentication of the client to each  server  it  contacts.   By
1183       default,  the  client  uses  the host/ or nfs/ service principal in its
1184       /etc/krb5.keytab for this purpose.
1185
1186       If the client has Kerberos configured, but the server does not,  or  if
1187       the  client does not have a keytab or the requisite service principals,
1188       the client uses AUTH_SYS and UID 0 for lease management.
1189
1190   Using non-privileged source ports
1191       NFS clients usually communicate with NFS servers via  network  sockets.
1192       Each end of a socket is assigned a port value, which is simply a number
1193       between 1 and 65535 that distinguishes socket endpoints at the same  IP
1194       address.   A  socket  is  uniquely defined by a tuple that includes the
1195       transport protocol (TCP or UDP) and the port values and IP addresses of
1196       both endpoints.
1197
1198       The  NFS  client  can choose any source port value for its sockets, but
1199       usually chooses a privileged port.  A privileged port is a  port  value
1200       less  than  1024.   Only  a  process  with root privileges may create a
1201       socket with a privileged source port.
1202
1203       The exact range of privileged source ports that can be chosen is set by
1204       a pair of sysctls to avoid choosing a well-known port, such as the port
1205       used by ssh.  This means the number of source ports available  for  the
1206       NFS  client, and therefore the number of socket connections that can be
1207       used at the same time, is practically limited to only a few hundred.
1208
1209       As described above, the traditional default NFS authentication  scheme,
1210       known as AUTH_SYS, relies on sending local UID and GID numbers to iden‐
1211       tify users making NFS requests.  An NFS server assumes that if  a  con‐
1212       nection  comes  from  a privileged port, the UID and GID numbers in the
1213       NFS requests on this connection have been verified by the client's ker‐
1214       nel  or  some  other local authority.  This is an easy system to spoof,
1215       but on a trusted physical network between trusted hosts, it is entirely
1216       adequate.
1217
1218       Roughly  speaking,  one  socket is used for each NFS mount point.  If a
1219       client could use non-privileged source ports as  well,  the  number  of
1220       sockets  allowed,  and  thus  the  maximum  number  of concurrent mount
1221       points, would be much larger.
1222
1223       Using non-privileged source ports may compromise server security  some‐
1224       what, since any user on AUTH_SYS mount points can now pretend to be any
1225       other when making NFS requests.  Thus NFS servers do not support this
1226       by default; it must be enabled explicitly, usually via an export option.
1227
1228       To  retain  good security while allowing as many mount points as possi‐
1229       ble, it is best to allow non-privileged client connections only if  the
1230       server and client both require strong authentication, such as Kerberos.
1231
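       As a sketch, a client that relies on strong authentication might
       combine a non-privileged source port with Kerberos as shown below
       (the server name and paths are placeholders, and the server's export
       must be configured to accept non-privileged ports):

               mount -t nfs -o noresvport,sec=krb5 server:/export /mnt
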
1232   Mounting through a firewall
1233       A  firewall  may reside between an NFS client and server, or the client
1234       or server may block some of its own ports via IP filter rules.   It  is
1235       still  possible  to mount an NFS server through a firewall, though some
1236       of the mount(8) command's automatic service endpoint  discovery  mecha‐
1237       nisms  may  not  work;  this  requires you to provide specific endpoint
1238       details via NFS mount options.
1239
1240       NFS servers normally run a portmapper or rpcbind  daemon  to  advertise
1241       their  service  endpoints to clients. Clients use the rpcbind daemon to
1242       determine:
1243
1244              What network port each RPC-based service is using
1245
1246              What transport protocols each RPC-based service supports
1247
1248       The rpcbind daemon uses a well-known port number (111) to help  clients
1249       find  a service endpoint.  Although NFS often uses a standard port num‐
1250       ber (2049), auxiliary services such as the NLM service can  choose  any
1251       unused port number at random.
1252
1253       Common  firewall  configurations block the well-known rpcbind port.  In
1254       the absence of an rpcbind service, the server administrator  fixes  the
1255       port  number  of  NFS-related  services  so that the firewall can allow
1256       access to specific NFS service ports.  Client administrators then spec‐
1257       ify  the  port number for the mountd service via the mount(8) command's
1258       mountport option.  It may also be necessary to enforce the use  of  TCP
1259       or UDP if the firewall blocks one of those transports.
1260
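       For example, if the server administrator has fixed the mountd
       service on port 20048 and the firewall passes only TCP, a client
       might mount with a command such as this (the port number, server
       name, and paths are illustrative):

               mount -t nfs -o vers=3,proto=tcp,mountport=20048 \
                     server:/export /mnt
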
1261   NFS Access Control Lists
1262       Solaris allows NFS version 3 clients direct access to POSIX Access Con‐
1263       trol Lists stored in its local file systems.  This proprietary sideband
1264       protocol,  known  as  NFSACL,  provides richer access control than mode
1265       bits.  Linux  implements  this  protocol  for  compatibility  with  the
1266       Solaris  NFS  implementation.  The NFSACL protocol never became a stan‐
1267       dard part of the NFS version 3 specification, however.
1268
1269       The NFS version 4 specification mandates a new version of  Access  Con‐
1270       trol Lists that are semantically richer than POSIX ACLs.  NFS version 4
1271       ACLs are not fully compatible with POSIX ACLs; as such,  some  transla‐
1272       tion  between  the  two  is required in an environment that mixes POSIX
1273       ACLs and NFS version 4.
1274

THE REMOUNT OPTION

1276       Generic mount options such as rw and sync can be modified on NFS  mount
1277       points  using the remount option.  See mount(8) for more information on
1278       generic mount options.
1279
1280       With few exceptions, NFS-specific options cannot be  modified  during
1281       a remount.  For example, the underlying transport or NFS version can‐
1282       not be changed by a remount.
1283
1284       Performing a remount on an NFS file system mounted with the noac option
1285       may  have unintended consequences.  The noac option is a combination of
1286       the generic option sync and the NFS-specific option actimeo=0.
1287
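       For example, to remount such a file system read-only while keeping
       noac semantics, the component options can be respecified explicitly
       (a sketch; /mnt is a placeholder):

               mount -o remount,ro,sync,actimeo=0 /mnt
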
1288   Unmounting after a remount
1289       For mount points that use NFS versions 2 or 3, the NFS  umount  subcom‐
1290       mand  depends on knowing the original set of mount options used to per‐
1291       form the MNT operation.  These options are stored on disk  by  the  NFS
1292       mount subcommand, and can be erased by a remount.
1293
1294       To ensure that the saved mount options are not erased during a remount,
1295       specify either the local mount directory, or the  server  hostname  and
1296       export pathname, but not both, during a remount.  For example,
1297
1298               mount -o remount,ro /mnt
1299
1300       merges the mount option ro with the mount options already saved on disk
1301       for the NFS server mounted at /mnt.
1302

FILES

1304       /etc/fstab     file system table
1305
1306       /etc/nfsmount.conf
1307                      Configuration file for NFS mounts
1308

NOTES

1310       Before 2.4.7, the Linux NFS client did not support NFS over TCP.
1311
1312       Before 2.4.20, the Linux NFS  client  used  a  heuristic  to  determine
1313       whether cached file data was still valid rather than using the standard
1314       close-to-open cache coherency method described above.
1315
1316       Starting with 2.4.22, the Linux NFS client employs a Van Jacobson-based
1317       RTT  estimator  to  determine  retransmit timeout values when using NFS
1318       over UDP.
1319
1320       Before 2.6.0, the Linux NFS client did not support NFS version 4.
1321
1322       Before 2.6.8, the Linux NFS client  used  only  synchronous  reads  and
1323       writes when the rsize and wsize settings were smaller than the system's
1324       page size.
1325
1326       The Linux client's support for protocol versions depends on whether the
1327       kernel  was  built  with  options  CONFIG_NFS_V2,  CONFIG_NFS_V3,  CON‐
1328       FIG_NFS_V4, CONFIG_NFS_V4_1, and CONFIG_NFS_V4_2.
1329

SEE ALSO

1331       fstab(5), mount(8), umount(8), mount.nfs(8), umount.nfs(8), exports(5),
1332       nfsmount.conf(5),   netconfig(5),   ipv6(7),   nfsd(8),   sm-notify(8),
1333       rpc.statd(8), rpc.idmapd(8), rpc.gssd(8), rpc.svcgssd(8), kerberos(1)
1334
1335       RFC 768 for the UDP specification.
1336       RFC 793 for the TCP specification.
1337       RFC 1094 for the NFS version 2 specification.
1338       RFC 1813 for the NFS version 3 specification.
1339       RFC 1832 for the XDR specification.
1340       RFC 1833 for the RPC bind specification.
1341       RFC 2203 for the RPCSEC GSS API protocol specification.
1342       RFC 7530 for the NFS version 4.0 specification.
1343       RFC 5661 for the NFS version 4.1 specification.
1344       RFC 7862 for the NFS version 4.2 specification.
1345
1346
1347
1348                                9 October 2012                          NFS(5)