NFS(5)                        File Formats Manual                       NFS(5)


NAME
       nfs - fstab format and options for the nfs file systems

SYNOPSIS
       /etc/fstab

DESCRIPTION
       NFS  is  an  Internet  Standard protocol created by Sun Microsystems in
       1984. NFS was developed to allow file sharing between systems  residing
       on  a local area network.  Depending on kernel configuration, the Linux
       NFS client may support NFS versions 2, 3, 4.0, 4.1, or 4.2.

       The mount(8) command attaches a file system to the system's name  space
       hierarchy  at  a  given mount point.  The /etc/fstab file describes how
       mount(8) should assemble a system's file name  hierarchy  from  various
       independent  file  systems  (including  file  systems  exported  by NFS
       servers).  Each line in the /etc/fstab file  describes  a  single  file
       system,  its  mount  point, and a set of default mount options for that
       mount point.

       For NFS file system mounts, a line in the /etc/fstab file specifies the
       server  name,  the path name of the exported server directory to mount,
       the local directory that is the mount point, the type  of  file  system
       that is being mounted, and a list of mount options that control the way
       the filesystem is mounted and how the NFS client behaves when accessing
       files on this mount point.  The fifth and sixth fields on each line are
       not used by NFS, so conventionally each contains the  digit  zero.  For
       example:

               server:path   /mountpoint   fstype   option,option,...   0 0

       The  server's  hostname  and  export pathname are separated by a colon,
       while the mount options are separated by commas. The  remaining  fields
       are separated by blanks or tabs.

       The server's hostname can be an unqualified hostname, a fully qualified
       domain name, a dotted quad IPv4 address, or an IPv6 address enclosed in
       square  brackets.   Link-local  and  site-local  IPv6 addresses must be
       accompanied by an interface identifier.  See  ipv6(7)  for  details  on
       specifying raw IPv6 addresses.

       The  fstype  field  contains  "nfs".   Use  of  the  "nfs4"  fstype  in
       /etc/fstab is deprecated.

MOUNT OPTIONS
       Refer to mount(8) for a description of generic mount options  available
       for  all file systems. If you do not need to specify any mount options,
       use the generic option defaults in /etc/fstab.

   Options supported by all versions
       These options are valid to use with any NFS version.

       nfsvers=n      The NFS protocol version  number  used  to  contact  the
                      server's  NFS  service.   If the server does not support
                      the requested version, the mount request fails.  If this
                      option  is  not  specified, the client tries version 4.2
                      first, then negotiates down until  it  finds  a  version
                      supported by the server.

       vers=n         This option is an alternative to the nfsvers option.  It
                      is included for compatibility with other operating  sys‐
                      tems.

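                      For example, to pin a mount to NFS version  3  with  the
                      nfsvers (or vers) option, a hypothetical /etc/fstab line
                      might read:

                          server:/export  /mnt  nfs  nfsvers=3  0 0
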
       soft / hard    Determines the recovery behavior of the NFS client after
                      an NFS request times out.  If neither option  is  speci‐
                      fied  (or if the hard option is specified), NFS requests
                      are retried indefinitely.  If the soft option is  speci‐
                      fied,  then  the  NFS  client fails an NFS request after
                      retrans retransmissions have been sent, causing the  NFS
                      client to return an error to the calling application.

                      NB:  A  so-called  "soft"  timeout can cause silent data
                      corruption in certain  cases.  As  such,  use  the  soft
                      option only when client responsiveness is more important
                      than data integrity.  Using NFS over TCP  or  increasing
                      the value of the retrans option may mitigate some of the
                      risks of using the soft option.

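                      For example, a mount that favors  client  responsiveness
                      over  indefinite  retries  might  combine  the  soft and
                      retrans options (the values here are illustrative):

                          server:/export  /mnt  nfs  soft,retrans=6  0 0
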
       softreval / nosoftreval
                      In cases where the NFS server is down, it may be  useful
                      to  allow  the  NFS client to continue to serve up paths
                      and attributes from  cache  after  retrans  attempts  to
                      revalidate  that  cache  have  timed out.  This may, for
                      instance, be helpful when trying to unmount a filesystem
                      tree from a server that is permanently down.

                      It  is possible to combine softreval with the soft mount
                      option, in which case operations that cannot  be  served
                      up  from  cache  will time out and return an error after
                      retrans attempts. The combination with the default  hard
                      mount option implies those uncached operations will con‐
                      tinue to retry until a response  is  received  from  the
                      server.

                      Note: the default mount option is nosoftreval which dis‐
                      allows fallback to cache when  revalidation  fails,  and
                      instead  follows  the  behavior  dictated by the hard or
                      soft mount option.

       intr / nointr  This option is provided for backward compatibility.   It
                      is ignored after kernel 2.6.25.

       timeo=n        The  time  in  deciseconds  (tenths of a second) the NFS
                      client waits for a response before  it  retries  an  NFS
                      request.

                      For NFS over TCP the default timeo value is 600 (60 sec‐
                      onds).  The NFS client performs  linear  backoff:  After
                      each retransmission the timeout is increased by timeo up
                      to the maximum of 600 seconds.

                      However, for NFS over UDP, the client uses  an  adaptive
                      algorithm  to  estimate an appropriate timeout value for
                      frequently used request types (such as  READ  and  WRITE
                      requests),  but  uses the timeo setting for infrequently
                      used request types (such as FSINFO  requests).   If  the
                      timeo option is not specified, infrequently used request
                      types  are  retried  after  1.1  seconds.   After   each
                      retransmission,  the  NFS client doubles the timeout for
                      that request, up to a maximum timeout length of 60  sec‐
                      onds.

       retrans=n      The  number  of  times  the NFS client retries a request
                      before it  attempts  further  recovery  action.  If  the
                      retrans  option  is  not specified, the NFS client tries
                      each UDP request three times and each TCP request twice.

                      The NFS client generates a "server not responding"  mes‐
                      sage after retrans retries, then attempts further recov‐
                      ery (depending on whether the hard mount  option  is  in
                      effect).

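                      For instance, to wait 15 seconds (timeo=150 deciseconds)
                      and allow five retransmissions,  an  illustrative  mount
                      command might be:

                          mount -t nfs -o timeo=150,retrans=5 server:/export /mnt
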
       rsize=n        The maximum number of bytes in each network READ request
                      that the NFS client can receive when reading data from a
                      file  on an NFS server.  The actual data payload size of
                      each NFS READ request is equal to or  smaller  than  the
                      rsize setting. The largest read payload supported by the
                      Linux NFS client is 1,048,576 bytes (one megabyte).

                      The rsize value is a positive integral multiple of 1024.
                      Specified rsize values lower than 1024 are replaced with
                      4096; values  larger  than  1048576  are  replaced  with
                      1048576.  If  a  specified value is within the supported
                      range but not a multiple of 1024, it is rounded down  to
                      the nearest multiple of 1024.

                      If  an rsize value is not specified, or if the specified
                      rsize value is  larger  than  the  maximum  that  either
                      client  or  server  can  support,  the client and server
                      negotiate the largest rsize value  that  they  can  both
                      support.

                      The rsize mount option as specified on the mount(8) com‐
                      mand line appears in the /etc/mtab  file.  However,  the
                      effective  rsize  value  negotiated  by  the  client and
                      server is reported in the /proc/mounts file.

       wsize=n        The maximum number of bytes per  network  WRITE  request
                      that the NFS client can send when writing data to a file
                      on an NFS server. The actual data payload size  of  each
                      NFS  WRITE request is equal to or smaller than the wsize
                      setting. The largest  write  payload  supported  by  the
                      Linux NFS client is 1,048,576 bytes (one megabyte).

                      Similar  to  rsize, the wsize value is a positive  inte‐
                      gral multiple of 1024.   Specified  wsize  values  lower
                      than  1024  are  replaced  with 4096; values larger than
                      1048576 are replaced with 1048576. If a specified  value
                      is  within  the  supported  range  but not a multiple of
                      1024, it is rounded down  to  the  nearest  multiple  of
                      1024.

                      If  a  wsize value is not specified, or if the specified
                      wsize value is  larger  than  the  maximum  that  either
                      client  or  server  can  support,  the client and server
                      negotiate the largest wsize value  that  they  can  both
                      support.

                      The wsize mount option as specified on the mount(8) com‐
                      mand line appears in the /etc/mtab  file.  However,  the
                      effective  wsize  value  negotiated  by  the  client and
                      server is reported in the /proc/mounts file.

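                      For example, to request one-megabyte transfers and  then
                      inspect the values actually negotiated (as  reported  in
                      /proc/mounts), one might run:

                          mount -t nfs -o rsize=1048576,wsize=1048576 server:/export /mnt
                          grep /mnt /proc/mounts
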
       ac / noac      Selects whether the client may cache file attributes. If
                      neither option is specified (or if ac is specified), the
                      client caches file attributes.

                      To  improve  performance,   NFS   clients   cache   file
                      attributes.  Every few seconds, an NFS client checks the
                      server's version of each file's attributes for  updates.
                      Changes  that  occur on the server in those small inter‐
                      vals remain  undetected  until  the  client  checks  the
                      server  again.  The  noac  option  prevents clients from
                      caching file attributes so that  applications  can  more
                      quickly detect file changes on the server.

                      In  addition  to preventing the client from caching file
                      attributes, the noac option forces application writes to
                      become  synchronous  so  that  local  changes  to a file
                      become visible on the  server  immediately.   That  way,
                      other clients can quickly detect recent writes when they
                      check the file's attributes.

                      Using the noac option provides greater  cache  coherence
                      among  NFS  clients  accessing  the  same  files, but it
                      exacts a significant  performance  penalty.   As   such,
                      judicious  use  of  file  locking is encouraged instead.
                      The DATA  AND  METADATA  COHERENCE  section  contains  a
                      detailed discussion of these trade-offs.

       acregmin=n     The minimum time (in seconds) that the NFS client caches
                      attributes of a regular file before  it  requests  fresh
                      attribute  information from a server.  If this option is
                      not specified, the NFS client uses a  3-second  minimum.
                      See  the  DATA AND METADATA COHERENCE section for a full
                      discussion of attribute caching.

       acregmax=n     The maximum time (in seconds) that the NFS client caches
                      attributes  of  a  regular file before it requests fresh
                      attribute information from a server.  If this option  is
                      not  specified, the NFS client uses a 60-second maximum.
                      See the DATA AND METADATA COHERENCE section for  a  full
                      discussion of attribute caching.

       acdirmin=n     The minimum time (in seconds) that the NFS client caches
                      attributes of  a  directory  before  it  requests  fresh
                      attribute  information from a server.  If this option is
                      not specified, the NFS client uses a 30-second  minimum.
                      See  the  DATA AND METADATA COHERENCE section for a full
                      discussion of attribute caching.

       acdirmax=n     The maximum time (in seconds) that the NFS client caches
                      attributes  of  a  directory  before  it  requests fresh
                      attribute information from a server.  If this option  is
                      not  specified, the NFS client uses a 60-second maximum.
                      See the DATA AND METADATA COHERENCE section for  a  full
                      discussion of attribute caching.

       actimeo=n      Using  actimeo sets all of acregmin, acregmax, acdirmin,
                      and acdirmax to the same value.  If this option  is  not
                      specified,  the NFS client uses the defaults for each of
                      these options listed above.

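                      For example, to fix all four of the  above  timeouts  at
                      ten minutes, a hypothetical /etc/fstab line would be:

                          server:/export  /mnt  nfs  actimeo=600  0 0
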
       bg / fg        Determines  how  the  mount(8)  command  behaves  if  an
                      attempt  to mount an export fails.  The fg option causes
                      mount(8) to exit with an error status if any part of the
                      mount  request  times  out  or  fails outright.  This is
                      called a "foreground" mount, and is the default behavior
                      if neither the fg nor bg mount option is specified.

                      If  the  bg  option  is  specified, a timeout or failure
                      causes the mount(8) command to fork a child  which  con‐
                      tinues to attempt to mount the export.  The parent imme‐
                      diately returns with a zero exit code.  This is known as
                      a "background" mount.

                      If  the  local  mount  point  directory  is missing, the
                      mount(8) command acts as if the mount request timed out.
                      This  permits  nested NFS mounts specified in /etc/fstab
                      to proceed in any order  during  system  initialization,
                      even  if some NFS servers are not yet available.  Alter‐
                      natively these issues can be addressed  using  an  auto‐
                      mounter (refer to automount(8) for details).

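                      For example, to keep system startup from stalling  on  a
                      server  that  may be down, a hypothetical entry can com‐
                      bine bg with the retry option (described below):

                          server:/export  /mnt  nfs  bg,retry=30  0 0
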
       nconnect=n     When  using  a connection oriented protocol such as TCP,
                      it may sometimes be advantageous to set up multiple con‐
                      nections between the client and server. For instance, if
                      your clients and/or servers are equipped  with  multiple
                      network  interface  cards (NICs), using multiple connec‐
                      tions to spread the load  may  improve  overall  perfor‐
                      mance.   In  such  cases, the nconnect option allows the
                      user to specify the number of connections that should be
                      established  between the client and server up to a limit
                      of 16.

                      Note that the nconnect option may also be used  by  some
                      pNFS drivers to decide how many connections to set up to
                      the data servers.

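                      For example, to spread traffic over  eight  TCP  connec‐
                      tions (an illustrative value; the limit is 16):

                          mount -t nfs -o nconnect=8 server:/export /mnt
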
       rdirplus / nordirplus
                      Selects  whether  to  use  NFS  v3  or  v4   READDIRPLUS
                      requests.   If  this  option  is  not specified, the NFS
                      client uses READDIRPLUS requests on NFS v3 or v4  mounts
                      to  read  small  directories.  Some applications perform
                      better if the client uses only READDIR requests for  all
                      directories.

       retry=n        The  number of minutes that the mount(8) command retries
                      an NFS mount operation in the foreground  or  background
                      before  giving up.  If this option is not specified, the
                      default value for foreground mounts is  2  minutes,  and
                      the default value for background mounts is 10000 minutes
                      (80 minutes shy of one week).  If a  value  of  zero  is
                      specified,  the mount(8) command exits immediately after
                      the first failure.

                      Note that this only affects how many  retries  are  made
                      and  doesn't affect the delay caused by each retry.  For
                      UDP each retry takes the time determined  by  the  timeo
                      and  retrans  options,  which by default will be about 7
                      seconds.  For TCP the default is 3 minutes,  but  system
                      TCP connection timeouts will sometimes limit the timeout
                      of each retransmission to around 2 minutes.

       sec=flavors    A colon-separated list of one or more  security  flavors
                      to use for accessing files on the mounted export. If the
                      server does not support any of these flavors, the  mount
                      operation  fails.   If sec= is not specified, the client
                      attempts to find a security flavor that both the  client
                      and  the  server  support.  Valid flavors are none, sys,
                      krb5, krb5i, and krb5p.  Refer to the SECURITY CONSIDER‐
                      ATIONS section for details.

       sharecache / nosharecache
                      Determines  how  the  client's  data cache and attribute
                      cache are shared when mounting the same export more than
                      once  concurrently.  Using the same cache reduces memory
                      requirements on the client and presents  identical  file
                      contents  to  applications  when the same remote file is
                      accessed via different mount points.

                      If neither option is specified,  or  if  the  sharecache
                      option is specified, then a single cache is used for all
                      mount points  that  access  the  same  export.   If  the
                      nosharecache  option is specified, then that mount point
                      gets a unique cache.  Note that when data and  attribute
                      caches  are  shared,  the  mount  options from the first
                      mount point take effect for subsequent concurrent mounts
                      of the same export.

                      As  of kernel 2.6.18, the behavior specified by noshare‐
                      cache is legacy caching behavior. This is  considered  a
                      data  risk since multiple cached copies of the same file
                      on the same client can become out of  sync  following  a
                      local update of one of the copies.

       resvport / noresvport
                      Specifies whether the NFS client should use a privileged
                      source port when communicating with an  NFS  server  for
                      this  mount  point.  If this option is not specified, or
                      the resvport option is specified, the NFS client uses  a
                      privileged  source  port.   If  the noresvport option is
                      specified, the NFS client uses a  non-privileged  source
                      port.   This  option  is supported in kernels 2.6.28 and
                      later.

                      Using non-privileged source  ports  helps  increase  the
                      maximum  number of NFS mount points allowed on a client,
                      but NFS servers must be configured to allow  clients  to
                      connect via non-privileged source ports.

                      Refer  to the SECURITY CONSIDERATIONS section for impor‐
                      tant details.

       lookupcache=mode
                      Specifies how the kernel manages its cache of  directory
                      entries  for  a  given  mount point.  mode can be one of
                      all, none, pos, or positive.  This option  is  supported
                      in kernels 2.6.28 and later.

                      The Linux NFS client caches the result of all NFS LOOKUP
                      requests.  If the requested directory  entry  exists  on
                      the  server,  the result is referred to as positive.  If
                      the requested directory entry  does  not  exist  on  the
                      server, the result is referred to as negative.

                      If this option is not specified, or if all is specified,
                      the client assumes both types of directory cache entries
                      are   valid   until   their  parent  directory's  cached
                      attributes expire.

                      If pos or positive is specified, the client assumes pos‐
                      itive  entries  are valid until their parent directory's
                      cached attributes expire, but always  revalidates  nega‐
                      tive entries before an application can use them.

                      If  none is specified, the client revalidates both types
                      of directory cache entries before an application can use
                      them.   This  permits quick detection of files that were
                      created or removed by  other  clients,  but  can  impact
                      application and server performance.

                      The  DATA  AND  METADATA  COHERENCE  section  contains a
                      detailed discussion of these trade-offs.

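                      For example, to keep positive entries cached but  always
                      recheck negative ones, so that files  newly  created  by
                      other clients are noticed promptly:

                          server:/export  /mnt  nfs  lookupcache=positive  0 0
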
       fsc / nofsc    Enables/disables caching of (read-only) data  pages  on
                      the  local  disk  using  the  FS-Cache   facility.   See
                      cachefilesd(8)      and       <kernel_source>/Documenta‐
                      tion/filesystems/caching for details on how to configure
                      the FS-Cache facility.  Default value is nofsc.

   Options for NFS versions 2 and 3 only
       Use these options, along with the options in the above subsection,  for
       NFS versions 2 and 3 only.

       proto=netid    The  netid determines the transport that is used to com‐
                      municate with the NFS  server.   Available  options  are
                      udp,  udp6,  tcp,  tcp6, and rdma.  Those which end in 6
                      use IPv6 addresses and are only available if support for
                      TI-RPC is built in. Others use IPv4 addresses.

                      Each  transport  protocol uses different default retrans
                      and timeo settings.  Refer to the description  of  these
                      two mount options for details.

                      In  addition to controlling how the NFS client transmits
                      requests to the server, this mount option also  controls
                      how  the mount(8) command communicates with the server's
                      rpcbind and mountd services.  Specifying  a  netid  that
                      uses  TCP  forces  all traffic from the mount(8) command
                      and the NFS client to use TCP.  Specifying a netid  that
                      uses UDP forces all traffic types to use UDP.

                      Before  using NFS over UDP, refer to the TRANSPORT METH‐
                      ODS section.

                      If the proto mount option is not specified, the mount(8)
                      command  discovers  which  protocols the server supports
                      and chooses an appropriate transport for  each  service.
                      Refer to the TRANSPORT METHODS section for more details.

       udp            The   udp   option   is  an  alternative  to  specifying
                      proto=udp.  It is included for compatibility with  other
                      operating systems.

                      Before  using NFS over UDP, refer to the TRANSPORT METH‐
                      ODS section.

       tcp            The  tcp  option  is  an   alternative   to   specifying
                      proto=tcp.   It is included for compatibility with other
                      operating systems.

       rdma           The  rdma  option  is  an  alternative   to   specifying
                      proto=rdma.

       port=n         The  numeric value of the server's NFS service port.  If
                      the server's NFS service is not available on the  speci‐
                      fied port, the mount request fails.

                      If  this  option  is  not specified, or if the specified
                      port value is 0, then the NFS client uses the  NFS  ser‐
                      vice port number advertised by the server's rpcbind ser‐
                      vice.  The mount request fails if the  server's  rpcbind
                      service  is  not  available, the server's NFS service is
                      not registered with its rpcbind service, or the server's
                      NFS service is not available on the advertised port.

       mountport=n    The  numeric  value of the server's mountd port.  If the
                      server's mountd service is not available on  the  speci‐
                      fied port, the mount request fails.

                      If  this  option  is  not specified, or if the specified
                      port value is 0, then  the  mount(8)  command  uses  the
                      mountd  service  port  number advertised by the server's
                      rpcbind  service.   The  mount  request  fails  if   the
                      server's  rpcbind service is not available, the server's
                      mountd service is not registered with its  rpcbind  ser‐
                      vice, or the server's mountd service is not available on
                      the advertised port.

                      This option can be used  when  mounting  an  NFS  server
                      through a firewall that blocks the rpcbind protocol.

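                      For example, if a firewall permits only the NFS port and
                      a fixed mountd port, both can be given explicitly.   The
                      mountd port below is hypothetical and site-specific:

                          mount -t nfs -o nfsvers=3,mountport=20048,port=2049 server:/export /mnt
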
       mountproto=netid
                      The  transport  the NFS client uses to transmit requests
                      to the NFS server's mountd service when performing  this
                      mount  request,  and  when  later  unmounting this mount
                      point.

                      netid may be udp or tcp, which use IPv4  addresses,  or,
                      if TI-RPC is built into the mount.nfs  command,  udp6  or
                      tcp6, which use IPv6 addresses.

                      This option can be used  when  mounting  an  NFS  server
                      through  a  firewall that blocks a particular transport.
                      When used in combination with the proto option,  differ‐
                      ent  transports for mountd requests and NFS requests can
                      be specified.  If the server's  mountd  service  is  not
                      available via the specified transport, the mount request
                      fails.

                      Refer to the TRANSPORT METHODS section for more  on  how
                      the  mountproto  mount  option  interacts with the proto
                      mount option.

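                      For instance, to carry NFS traffic over TCP while  still
                      contacting  mountd  over  UDP  (an  illustrative  combi‐
                      nation):

                          mount -t nfs -o nfsvers=3,proto=tcp,mountproto=udp server:/export /mnt
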
       mounthost=name The hostname of the host running mountd.  If this option
                      is  not specified, the mount(8) command assumes that the
                      mountd service runs on the same host as the NFS service.

       mountvers=n    The RPC version number  used  to  contact  the  server's
                      mountd.   If  this  option  is not specified, the client
                      uses a version number appropriate to the  requested  NFS
                      version.   This  option is useful when multiple NFS ser‐
                      vices are running on the same remote server host.

       namlen=n       The maximum length  of  a  pathname  component  on  this
                      mount.   If  this  option  is not specified, the maximum
                      length is negotiated with the  server.  In  most  cases,
                      this maximum length is 255 characters.

                      Some early versions of NFS did not support this negotia‐
                      tion.   Using  this  option  ensures  that   pathconf(3)
                      reports  the proper maximum component length to applica‐
                      tions in such cases.

       lock / nolock  Selects whether to use the NLM sideband protocol to lock
                      files on the server.  If neither option is specified (or
                      if lock is specified), NLM  locking  is  used  for  this
                      mount point.  When using the nolock option, applications
                      can lock files, but such locks  provide  exclusion  only
                      against  other  applications running on the same client.
                      Remote applications are not affected by these locks.

                      NLM locking must be disabled with the nolock option when
                      using NFS to mount /var because /var contains files used
                      by the NLM implementation on Linux.   Using  the  nolock
                      option  is  also  required  when mounting exports on NFS
                      servers that do not support the NLM protocol.

       cto / nocto    Selects whether to  use  close-to-open  cache  coherence
                      semantics.  If neither option is specified (or if cto is
                      specified), the client uses close-to-open  cache  coher‐
                      ence  semantics.  If  the nocto option is specified, the
                      client uses a non-standard heuristic to  determine  when
                      files on the server have changed.

                      Using the nocto option may improve performance for read-
                      only mounts, but should be used only if the data on  the
                      server changes only occasionally.  The DATA AND METADATA
                      COHERENCE section discusses the behavior of this  option
                      in more detail.

       acl / noacl    Selects  whether  to use the NFSACL sideband protocol on
                      this mount point.  The NFSACL  sideband  protocol  is  a
                      proprietary protocol implemented in Solaris that manages
                      Access Control Lists. NFSACL was never made  a  standard
                      part of the NFS protocol specification.

                      If neither the acl nor the noacl option is specified,
                      the NFS client negotiates with the server to see if  the
                      NFSACL protocol is supported, and uses it if the  server
                      supports it.  Disabling the NFSACL sideband protocol may
                      be necessary if the negotiation  causes  problems on the
                      client or server.  Refer to the SECURITY  CONSIDERATIONS
                      section for more details.

       local_lock=mechanism
                      Specifies whether to use local locking  for  either  or
                      both of the flock and  POSIX  locking  mechanisms.
                      mechanism can be one of  all,  flock,  posix,  or  none.
                      This option is supported in kernels 2.6.37 and later.

                      The Linux NFS client provides a way to make locks local.
                      This means that applications can lock  files,  but  such
                      locks provide exclusion only against other  applications
                      running  on the same client. Remote applications are not
                      affected by these locks.

                      If this option is not specified, or if  none  is  speci‐
                      fied, the client assumes that the locks are not local.

                      If  all is specified, the client assumes that both flock
                      and POSIX locks are local.

                      If flock is specified,  the  client  assumes  that  only
                      flock  locks are local and uses NLM sideband protocol to
                      lock files when POSIX locks are used.

                      If posix is specified, the  client  assumes  that  POSIX
                      locks  are  local and uses NLM sideband protocol to lock
                      files when flock locks are used.

                      To support legacy flock behavior similar to that of  NFS
                      clients < 2.6.12, use 'local_lock=flock'. This option is
                      required when exporting NFS mounts via  Samba  as  Samba
                      maps  Windows  share  mode  locks  as  flock.  Since NFS
                      clients > 2.6.12  implement  flock  by  emulating  POSIX
                      locks, this would otherwise result in conflicting locks.

                      NOTE: When used together, the 'local_lock' mount  option
                      is overridden by the 'nolock'/'lock' mount option.

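                      For example, when re-exporting an NFS  mount  via  Samba
                      as described above, the mount might be performed with:

                          mount -t nfs -o local_lock=flock server:/export /mnt
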
   Options for NFS version 4 only
       Use these options, along with  the  options  in  the  first  subsection
       above, for NFS version 4.0 and newer.

       proto=netid    The  netid determines the transport that is used to com‐
                      municate with the NFS  server.   Supported  options  are
                      tcp, tcp6, and rdma.  tcp6 uses IPv6 addresses  and  is
                      only available if support for TI-RPC is  built  in.  The
                      other two use IPv4 addresses.

                      All  NFS  version 4 servers are required to support TCP,
                      so if this mount option is not specified, the  NFS  ver‐
                      sion  4  client  uses  the  TCP  protocol.  Refer to the
                      TRANSPORT METHODS section for more details.

       minorversion=n Specifies the  protocol  minor  version  number.   NFSv4
                      introduces   "minor   versioning,"  where  NFS  protocol
                      enhancements can be introduced without bumping  the  NFS
                      protocol  version  number.   Before  kernel  2.6.38, the
                      minor version is always zero, and  this  option  is  not
                      recognized.  From that kernel onward, specifying "minor‐
                      version=1" enables a number of advanced  features,  such
                      as NFSv4 sessions.

                      Recent  kernels  allow the minor version to be specified
                      using  the  vers=  option.   For   example,   specifying
                      vers=4.1  is  the  same  as  specifying vers=4,minorver‐
                      sion=1.

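                      For example, either of these  illustrative  fstab  lines
                      selects NFS version 4.1:

                          server:/export  /mnt  nfs  vers=4.1               0 0
                          server:/export  /mnt  nfs  vers=4,minorversion=1  0 0
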
       port=n         The numeric value of the server's NFS service port.   If
                      the  server's NFS service is not available on the speci‐
                      fied port, the mount request fails.

                      If this mount option is not specified,  the  NFS  client
                      uses  the standard NFS port number of 2049 without first
                      checking the server's rpcbind service.  This  allows  an
                      NFS  version 4 client to contact an NFS version 4 server
                      through a firewall that may block rpcbind requests.

                      If the specified port value is 0, then  the  NFS  client
                      uses  the  NFS  service  port  number  advertised by the
                      server's rpcbind service.  The mount  request  fails  if
                      the  server's  rpcbind  service  is  not  available, the
                      server's NFS service is not registered with its  rpcbind
                      service, or the server's NFS service is not available on
                      the advertised port.

       cto / nocto    Selects whether to  use  close-to-open  cache  coherence
                      semantics  for  NFS directories on this mount point.  If
                      neither cto nor nocto is specified, the  default  is  to
                      use close-to-open cache coherence semantics for directo‐
                      ries.

                      File data caching  behavior  is  not  affected  by  this
                      option.   The  DATA  AND METADATA COHERENCE section dis‐
                      cusses the behavior of this option in more detail.

       clientaddr=n.n.n.n

       clientaddr=n:n:...:n
                      Specifies a single IPv4 address (in  dotted-quad  form),
                      or  a  non-link-local  IPv6 address, that the NFS client
                      advertises to allow servers to perform NFS  version  4.0
                      callback requests against files on this mount point.  If
                      the server is unable to establish  callback  connections
                      to  clients,  performance  may  degrade,  or accesses to
                      files may temporarily hang.  A value of IPv4_ANY
                      (0.0.0.0), or the equivalent IPv6 any  address,  signals
                      to the NFS server that this NFS client  does  not  want
                      delegations.

                      If  this  option  is not specified, the mount(8) command
                      attempts to discover  an  appropriate  callback  address
                      automatically.   The  automatic discovery process is not
                      perfect, however.  In the presence  of  multiple  client
                      network  interfaces, special routing policies, or atypi‐
                      cal network topologies, the exact  address  to  use  for
                      callbacks may be nontrivial to determine.

                      NFS  protocol versions 4.1 and 4.2 use the client-estab‐
                      lished TCP connection for callback requests, so  do  not
                      require  the  server  to  connect  to  the client.  This
                      option therefore affects only NFS version 4.0 mounts.

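                      For example, a multi-homed client might  advertise  one
                      specific callback address for a version 4.0 mount  (the
                      address below is a documentation placeholder):

                          mount -t nfs -o vers=4.0,clientaddr=192.0.2.10 server:/export /mnt
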
       migration / nomigration
                      Selects whether the client uses an identification string
                      that  is  compatible with NFSv4 Transparent State Migra‐
                      tion (TSM).  If the mounted server supports NFSv4 migra‐
                      tion with TSM, specify the migration option.

                      Some  server  features misbehave in the face of a migra‐
                      tion-compatible identification string.  The  nomigration
                      option  retains  the use of a traditional client identi‐
                      fication string which  is  compatible  with  legacy  NFS
                      servers.  This is also the behavior if neither option is
                      specified.  A client's open and  lock  state  cannot  be
                      migrated  transparently  when it identifies itself via a
                      traditional identification string.

                      This mount option has no effect with  NFSv4  minor  ver‐
                      sions  newer  than zero, which always use TSM-compatible
                      client identification strings.

nfs4 FILE SYSTEM TYPE
       The nfs4 file system type is an old syntax for specifying NFSv4  usage.
       It can still be used with all NFSv4-specific and common options, except
       for the nfsvers mount option.

MOUNT CONFIGURATION FILE
       If the mount command is configured to do so, all of the  mount  options
       described  in  the  previous  section  can  also  be  configured in the
       /etc/nfsmount.conf file. See nfsmount.conf(5) for details.

EXAMPLES
       To mount an export using NFS version 2, use the nfs  file  system  type
       and  specify the nfsvers=2 mount option.  To mount using NFS version 3,
       use the nfs file system type and specify the  nfsvers=3  mount  option.
       To mount using NFS version 4, use either the nfs file system type, with
       the nfsvers=4 mount option, or the nfs4 file system type.

       The following example from an /etc/fstab file causes the mount  command
       to negotiate reasonable defaults for NFS behavior.

               server:/export  /mnt  nfs   defaults                      0 0

       Here  is  an example from an /etc/fstab file for an NFS version 2 mount
       over UDP.

               server:/export  /mnt  nfs   nfsvers=2,proto=udp           0 0

       This example shows how to mount using NFS version 4 over TCP with  Ker‐
       beros 5 mutual authentication.

               server:/export  /mnt  nfs4  sec=krb5                      0 0

       This  example shows how to mount using NFS version 4 over TCP with Ker‐
       beros 5 privacy or data integrity mode.

               server:/export  /mnt  nfs4  sec=krb5p:krb5i               0 0

       This example can be used to mount /usr over NFS.

               server:/export  /usr  nfs   ro,nolock,nocto,actimeo=3600  0 0

       This example shows how to mount an NFS server using a  raw  IPv6  link-
       local address.

               [fe80::215:c5ff:fb3e:e2b1%eth0]:/export /mnt nfs defaults 0 0

TRANSPORT METHODS
       NFS clients send requests to NFS servers via Remote Procedure Calls, or
       RPCs.  The RPC client discovers remote service endpoints automatically,
       handles per-request authentication, adjusts request parameters for dif‐
       ferent byte endianness on client and server, and  retransmits  requests
       that  may  have  been  lost by the network or server.  RPC requests and
       replies flow over a network transport.

       In most cases, the mount(8) command, NFS client,  and  NFS  server  can
       automatically  negotiate  proper  transport and data transfer size set‐
       tings for a mount point.  In some cases, however, it  pays  to  specify
       these settings explicitly using mount options.

       Traditionally,  NFS  clients  used  the  UDP  transport exclusively for
       transmitting requests to servers.  Though its implementation is simple,
       NFS  over  UDP  has  many limitations that prevent smooth operation and
       good performance in  some  common  deployment  environments.   Even  an
       insignificant  packet  loss  rate  results  in  the  loss  of whole NFS
       requests; as such, retransmit timeouts are  usually  in  the  subsecond
       range  to  allow  clients to recover quickly from dropped requests, but
       this can result in extraneous network traffic and server load.

       However, UDP can be quite effective in specialized settings  where  the
       network's MTU is large relative to NFS's data transfer size  (such  as
       network environments that enable jumbo Ethernet frames).  In such envi‐
       ronments, trimming the rsize and wsize settings so that each  NFS  read
       or write request fits in just a few network frames (or even in a single
       frame)  is  advised.   This  reduces the probability that the loss of a
       single MTU-sized network frame results in the loss of an  entire  large
       read or write request.

       TCP is the default transport protocol used for all modern NFS implemen‐
       tations.  It performs well in almost every conceivable network environ‐
       ment  and  provides excellent guarantees against data corruption caused
       by network unreliability.  TCP is often a requirement  for  mounting  a
       server through a network firewall.

       Under  normal circumstances, networks drop packets much more frequently
       than NFS servers drop requests.   As  such,  an  aggressive  retransmit
       timeout  setting for NFS over TCP is unnecessary. Typical timeout  set‐
       tings for NFS over TCP are between one and  ten  minutes.   After   the
       client  exhausts  its  retransmits  (the  value  of  the  retrans mount
       option), it assumes a network partition has occurred, and  attempts  to
       reconnect  to the server on a fresh socket. Since TCP itself makes net‐
       work data transfer reliable, rsize and wsize can safely be  allowed  to
       default  to  the  largest  values  supported by both client and server,
       independent of the network's MTU size.

   Using the mountproto mount option
       This section applies only to NFS version 2 and version 3  mounts  since
       NFS version 4 does not use a separate protocol for mount requests.

       The  Linux  NFS  client can use a different transport for contacting an
       NFS server's rpcbind service, its mountd service, its Network Lock Man‐
       ager (NLM) service, and its NFS service.  The exact transports employed
       by the Linux NFS client for each mount point depend on the settings  of
       the  transport mount options, which include proto, mountproto, udp, and
       tcp.

       The client sends Network Status Manager (NSM) notifications via UDP  no
       matter what transport options are specified, but listens for server NSM
       notifications on both  UDP  and  TCP.   The  NFS  Access  Control  List
       (NFSACL) protocol shares the same transport as the main NFS service.

       If no transport options are specified, the Linux NFS client uses UDP to
       contact the server's mountd service, and TCP to contact its NLM and NFS
       services by default.

       If the server does not support these transports for these services, the
       mount(8) command attempts to discover what  the  server  supports,  and
       then  retries  the  mount request once using the discovered transports.
       If the server does not advertise any transport supported by the  client
       or  is  misconfigured, the mount request fails.  If the bg option is in
       effect, the mount command backgrounds itself and continues  to  attempt
       the specified mount request.

       When  the  proto option, the udp option, or the tcp option is specified
       but the mountproto option is not, the specified transport  is  used  to
       contact both the server's mountd service and  the  NLM  and  NFS  ser‐
       vices.

       If the mountproto option is specified but none of the proto, udp or tcp
       options  are  specified,  then  the specified transport is used for the
       initial mountd request, but the mount command attempts to discover what
       the server supports for the NFS protocol, preferring TCP if both trans‐
       ports are supported.

       If both the mountproto and proto (or udp or tcp) options are specified,
       then  the  transport specified by the mountproto option is used for the
       initial mountd request, and the transport specified by the proto option
       (or the udp or tcp options) is used for NFS, no matter what order these
       options appear.  No automatic service discovery is performed  if  these
       options are specified.

       If any of the proto, udp, tcp, or mountproto options are specified more
       than once on the same mount command line, then the value of the  right‐
       most instance of each of these options takes effect.

   Using NFS over UDP on high-speed links
       Using NFS over UDP on high-speed links such as Gigabit can cause silent
       data corruption.

       The problem can be triggered at high loads, and is caused  by  problems
       in  IP fragment reassembly. NFS reads and writes typically transmit UDP
       packets of 4 kilobytes or more, which have to be broken up into several
       fragments  in  order  to  be  sent over the Ethernet link, which limits
       packets to 1500 bytes by default. This process happens at the  IP  net‐
       work layer and is called fragmentation.

       In order to identify fragments that belong together,  IP  assigns  a
       16-bit IP ID value to each packet; fragments generated  from  the  same
       UDP packet will have the same IP ID. The receiving system will  collect
       these fragments and combine them to form the original UDP packet.  This
       process is called reassembly. The default timeout for packet reassembly
       is 30 seconds; if the network stack does not receive all fragments of a
       given  packet  within this interval, it assumes the missing fragment(s)
       got lost and discards those it already received.

       The problem this creates over high-speed links is that it  is  possible
       to  send more than 65536 packets within 30 seconds. In fact, with heavy
       NFS traffic one can observe that the IP IDs repeat after about  5  sec‐
       onds.

       This  has  serious  effects  on  reassembly: if one fragment gets lost,
       another fragment from a different packet but with the same IP  ID  will
       arrive within the 30 second timeout, and the network stack will combine
       these fragments to form a new packet. Most of the time, network  layers
       above  IP  will detect this mismatched reassembly - in the case of UDP,
       the UDP checksum, which is a 16-bit checksum  over  the  entire  packet
       payload, will usually not match, and UDP will discard the bad packet.

       However, the UDP checksum is only 16 bits, so there is a chance of 1 in
       65536 that it will match even if the packet payload is completely  ran‐
       dom (which very often isn't the case).  When this happens, silent  data
       corruption will occur.

       This potential should be taken seriously, at least on Gigabit Ethernet.
       Network  speeds  of  100Mbit/s  should  be considered less problematic,
       because with most traffic patterns IP ID wrap  around  will  take  much
       longer than 30 seconds.

       It  is  therefore strongly recommended to use NFS over TCP where possi‐
       ble, since TCP does not perform fragmentation.

       If you absolutely have to use NFS over UDP over Gigabit Ethernet,  some
       steps  can  be taken to mitigate the problem and reduce the probability
       of corruption:

       Jumbo frames:  Many Gigabit network cards are capable  of  transmitting
                      frames  bigger  than  the 1500 byte limit of traditional
                      Ethernet, typically 9000 bytes. Using  jumbo  frames  of
                      9000  bytes will allow you to run NFS over UDP at a page
                      size of 8K without fragmentation.  Of  course,  this  is
                      only  feasible  if  all  involved stations support jumbo
                      frames.

                      To enable a machine to send jumbo frames on  cards  that
                      support  it, it is sufficient to configure the interface
                      for an MTU value of 9000.

       Lower reassembly timeout:
                      By lowering this timeout below the time it takes the  IP
                      ID counter to wrap around, incorrect reassembly of frag‐
                      ments can be prevented as well. To do so,  simply  write
                      the   new   timeout  value  (in  seconds)  to  the  file
                      /proc/sys/net/ipv4/ipfrag_time.

                      A value of 2 seconds will greatly reduce the probability
                      of  IP ID clashes on a single Gigabit link, while  still
                      allowing for a reasonable timeout when  receiving  frag‐
                      mented traffic from distant peers.

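       For illustration, the two mitigations above might be applied with  com‐
       mands such as the following (the interface name is a placeholder):

               ip link set eth0 mtu 9000
               echo 2 > /proc/sys/net/ipv4/ipfrag_time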

DATA AND METADATA COHERENCE

909       Some  modern cluster file systems provide perfect cache coherence among
910       their clients.  Perfect cache coherence among disparate NFS clients  is
911       expensive  to  achieve, especially on wide area networks.  As such, NFS
912       settles for weaker cache coherence that satisfies the  requirements  of
913       most file sharing types.
914
915   Close-to-open cache consistency
916       Typically  file sharing is completely sequential.  First client A opens
917       a file, writes something to it, then closes it.  Then  client  B  opens
918       the same file, and reads the changes.
919
920       When an application opens a file stored on an NFS version 3 server, the
921       NFS client checks that the file exists on the server and  is  permitted
922       to  the  opener by sending a GETATTR or ACCESS request.  The NFS client
923       sends these requests regardless of the freshness of the  file's  cached
924       attributes.
925
926       When  the  application  closes the file, the NFS client writes back any
927       pending changes to the file so  that  the  next  opener  can  view  the
928       changes.  This also gives the NFS client an opportunity to report write
929       errors to the application via the return code from close(2).
930
931       The behavior of checking at open time and flushing  at  close  time  is
932       referred to as close-to-open cache consistency, or CTO.  It can be dis‐
933       abled for an entire mount point using the nocto mount option.
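
       For example, a sketch of an /etc/fstab entry that disables CTO (the
       server name and path names are placeholders):

               server:/export   /mnt   nfs   nocto   0 0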
934
935   Weak cache consistency
936       There are still opportunities for a  client's  data  cache  to  contain
937       stale  data.  The NFS version 3 protocol introduced "weak cache consis‐
938       tency" (also known as WCC) which provides a way of efficiently checking
939       a  file's  attributes before and after a single request.  This allows a
940       client to help identify changes that could  have  been  made  by  other
941       clients.
942
943       When  a client is using many concurrent operations that update the same
944       file at the same time (for example, during asynchronous write  behind),
945       it  is  still difficult to tell whether it was that client's updates or
946       some other client's updates that altered the file.
947
948   Attribute caching
949       Use the noac mount option to achieve attribute  cache  coherence  among
950       multiple  clients.   Almost  every  file  system  operation checks file
951       attribute information.  The client keeps this information cached for  a
952       period  of  time  to  reduce  network and server load.  When noac is in
953       effect, a client's file attribute cache is disabled, so each  operation
954       that  needs  to  check  a file's attributes is forced to go back to the
955       server.  This permits a client to see changes to a file  very  quickly,
956       at the cost of many extra network operations.
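
       For example, a sketch of a mount command that disables attribute
       caching (the server name and path names are placeholders):

               mount -o noac server:/export /mnt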
957
958       Be  careful not to confuse the noac option with "no data caching."  The
959       noac mount option prevents the client from caching file  metadata,  but
960       there are still races that may result in data cache incoherence between
961       client and server.
962
963       The NFS protocol is not designed to support true  cluster  file  system
964       cache  coherence  without  some  type of application serialization.  If
965       absolute cache coherence among clients is required, applications should
966       use file locking.  Alternatively, applications can open their files
967       with the O_DIRECT flag to disable data caching entirely.
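
       As a command-line illustration, GNU dd can open a file with O_DIRECT
       via its oflag=direct flag (the output file name is a placeholder):

               dd if=/dev/zero of=/mnt/testfile bs=1M count=1 oflag=direct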
968
969   File timestamp maintenance
970       NFS servers are responsible for managing file and directory  timestamps
971       (atime,  ctime,  and  mtime).  When a file is accessed or updated on an
972       NFS server, the file's timestamps are updated just like they  would  be
973       on a filesystem local to an application.
974
975       NFS  clients  cache  file  attributes,  including timestamps.  A file's
976       timestamps are updated on NFS clients when its attributes are retrieved
977       from  the  NFS  server.   Thus there may be some delay before timestamp
978       updates on an NFS server appear to applications on NFS clients.
979
980       To comply with the POSIX filesystem  standard,  the  Linux  NFS  client
981       relies on NFS servers to keep a file's mtime and ctime timestamps prop‐
982       erly up to date.  It does this by flushing local data  changes  to  the
983       server  before reporting mtime to applications via system calls such as
984       stat(2).
985
986       The Linux client handles atime  updates  more  loosely,  however.   NFS
987       clients  maintain good performance by caching data, but that means that
988       application reads, which normally update atime, are  not  reflected  to
989       the server where a file's atime is actually maintained.
990
991       Because of this caching behavior, the Linux NFS client does not support
992       generic atime-related mount options.  See mount(8) for details on these
993       options.
994
995       In particular, the atime/noatime, diratime/nodiratime, relatime/norela‐
996       time, and strictatime/nostrictatime mount options have no effect on NFS
997       mounts.
998
999       /proc/mounts  may  report  that the relatime mount option is set on NFS
1000       mounts, but in fact the atime semantics are always as  described  here,
1001       and are not like relatime semantics.
1002
1003   Directory entry caching
1004       The  Linux NFS client caches the result of all NFS LOOKUP requests.  If
1005       the requested directory entry exists  on  the  server,  the  result  is
1006       referred  to  as  a positive lookup result.  If the requested directory
1007       entry does not exist on  the  server  (that  is,  the  server  returned
1008       ENOENT), the result is referred to as a negative lookup result.
1009
1010       To  detect  when  directory  entries  have been added or removed on the
1011       server, the Linux NFS client  watches  a  directory's  mtime.   If  the
1012       client  detects  a  change in a directory's mtime, the client drops all
1013       cached LOOKUP results for that directory.  Since the directory's  mtime
1014       is a cached attribute, it may take some time before a client notices it
1015       has changed.  See the descriptions of the acdirmin, acdirmax, and  noac
1016       mount  options  for more information about how long a directory's mtime
1017       is cached.
1018
1019       Caching directory entries improves the performance of applications that
1020       do  not  share  files with applications on other clients.  Using cached
1021       information about directories can interfere with applications that  run
1022       concurrently  on  multiple  clients  and need to detect the creation or
1023       removal of files quickly, however.  The lookupcache mount option allows
1024       some tuning of directory entry caching behavior.
1025
1026       Before  kernel  release 2.6.28, the Linux NFS client tracked only posi‐
1027       tive lookup results.  This permitted applications to detect new  direc‐
1028       tory  entries  created  by  other clients quickly while still providing
1029       some of the performance benefits of caching.  If an application depends
1030       on  the  previous  lookup caching behavior of the Linux NFS client, you
1031       can use lookupcache=positive.
1032
1033       If the client ignores its cache and validates every application  lookup
1034       request  with the server, that client can immediately detect when a new
1035       directory entry has been either created or removed by  another  client.
1036       You  can  specify  this behavior using lookupcache=none.  The extra NFS
1037       requests needed if the client does  not  cache  directory  entries  can
1038       exact a performance penalty.  Disabling lookup caching should result in
1039       less of a performance penalty than using noac, and has no effect on how
1040       the NFS client caches the attributes of files.
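
       For example, sketches of mount commands that select each behavior
       (the server name and path names are placeholders):

               mount -o lookupcache=positive server:/export /mnt
               mount -o lookupcache=none server:/export /mnt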
1041
1042   The sync mount option
1043       The NFS client treats the sync mount option differently than some other
1044       file systems (refer to mount(8) for a description of the  generic  sync
1045       and  async  mount options).  If neither sync nor async is specified (or
1046       if the async option is specified), the NFS client delays sending appli‐
1047       cation writes to the server until any of these events occur:
1048
1049              Memory pressure forces reclamation of system memory resources.
1050
1051              An application flushes file data explicitly with sync(2),
1052              msync(2), or fsync(2).
1053
1054              An application closes a file with close(2).
1055
1056              The file is locked/unlocked via fcntl(2).
1057
1058       In other words, under normal circumstances, data written by an applica‐
1059       tion may not immediately appear on the server that hosts the file.
1060
1061       If  the sync option is specified on a mount point, any system call that
1062       writes data to files on that mount point causes that data to be flushed
1063       to  the  server  before  the system call returns control to user space.
1064       This provides greater data cache coherence among clients, but at a sig‐
1065       nificant performance cost.
1066
1067       Applications  can  use the O_SYNC open flag to force application writes
1068       to individual files to go to the server immediately without the use  of
1069       the sync mount option.
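
       For example, a sketch of a mount command that forces synchronous
       writes for an entire mount point (the server name and path names are
       placeholders):

               mount -o sync server:/export /mnt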
1070
1071   Using file locks with NFS
1072       The  Network Lock Manager protocol is a separate sideband protocol used
1073       to manage file locks in NFS version 2 and version 3.  To  support  lock
1074       recovery after a client or server reboot, a second sideband protocol --
1075       known as the Network Status Manager protocol -- is also  required.   In
1076       NFS  version 4, file locking is supported directly in the main NFS pro‐
1077       tocol, and the NLM and NSM sideband protocols are not used.
1078
1079       In most cases, NLM and NSM services are started automatically,  and  no
1080       extra configuration is required.  Configure all NFS clients with fully-
1081       qualified domain names to ensure that NFS servers can find  clients  to
1082       notify them of server reboots.
1083
1084       NLM supports advisory file locks only.  To lock NFS files, use fcntl(2)
1085       with the F_GETLK and F_SETLK commands.  The NFS  client  converts  file
1086       locks obtained via flock(2) to advisory locks.
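
       For example, the flock(1) utility, which uses flock(2) locks that the
       NFS client converts as described above, can serialize shell commands
       against a file on an NFS mount (the lock file path and the command
       are placeholders):

               flock /mnt/export/app.lock -c 'update-data.sh'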
1087
1088       When  mounting  servers  that  do not support the NLM protocol, or when
1089       mounting an NFS server through a firewall that blocks the  NLM  service
1090       port,  specify  the  nolock  mount option. NLM locking must be disabled
1091       with the nolock option when using NFS to mount /var because  /var  con‐
1092       tains files used by the NLM implementation on Linux.
1093
1094       Specifying the nolock option may also be advisable to improve the
1095       performance of a proprietary application that runs on a single client
1096       and uses file locks extensively.
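
       For example, a sketch of a mount command that disables NLM locking
       (the server name and path names are placeholders):

               mount -o nolock server:/export /mnt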
1097
1098   NFS version 4 caching features
1099       The data and metadata caching behavior of NFS version 4 clients is sim‐
1100       ilar to that of earlier versions.  However, NFS version 4 adds two fea‐
1101       tures  that  improve cache behavior: change attributes and file delega‐
1102       tion.
1103
1104       The change attribute is a new part of NFS file and  directory  metadata
1105       which  tracks  data changes.  It replaces the use of a file's modifica‐
1106       tion and change time stamps as a way for clients to validate  the  con‐
1107       tent  of  their  caches.  Change attributes are independent of the time
1108       stamp resolution on either the server or client, however.
1109
1110       A file delegation is a contract between an NFS  version  4  client  and
1111       server  that  allows  the  client  to treat a file temporarily as if no
1112       other client is accessing it.  The server promises to notify the client
1113       (via  a  callback  request)  if  another client attempts to access that
1114       file.  Once a file has been delegated to a client, the client can cache
1115       that  file's  data  and  metadata  aggressively  without contacting the
1116       server.
1117
1118       File delegations come in two flavors: read and write.  A  read  delega‐
1119       tion  means that the server notifies the client about any other clients
1120       that want to write to the file.  A  write  delegation  means  that  the
1121       client gets notified about either read or write accessors.
1122
1123       Servers  grant  file  delegations when a file is opened, and can recall
1124       delegations at any time when another client wants access  to  the  file
1125       that  conflicts  with  any delegations already granted.  Delegations on
1126       directories are not supported.
1127
1128       In order to support delegation callback, the server checks the  network
1129       return  path to the client during the client's initial contact with the
1130       server.  If contact with the client cannot be established,  the  server
1131       simply does not grant any delegations to that client.
1132

SECURITY CONSIDERATIONS

1134       NFS  servers  control access to file data, but they depend on their RPC
1135       implementation to provide authentication of NFS requests.   Traditional
1136       NFS access control mimics the standard mode bit access control provided
1137       in local file systems.  Traditional RPC authentication uses a number to
1138       represent each user (usually the user's own uid), a number to represent
1139       the user's group (the user's gid), and a set  of  up  to  16  auxiliary
1140       group numbers to represent other groups of which the user may be a mem‐
1141       ber.
1142
1143       Typically, file data and user ID values appear  unencrypted  (i.e.  "in
1144       the  clear")  on the network.  Moreover, NFS versions 2 and 3 use sepa‐
1145       rate sideband protocols for mounting, locking and unlocking files,  and
1146       reporting system status of clients and servers.  These auxiliary proto‐
1147       cols use no authentication.
1148
1149       In addition to combining these sideband protocols  with  the  main  NFS
1150       protocol,  NFS  version 4 introduces more advanced forms of access con‐
1151       trol, authentication, and in-transit data protection.  The NFS  version
1152       4 specification mandates support for strong authentication and security
1153       flavors  that  provide  per-RPC  integrity  checking  and   encryption.
1154       Because  NFS  version 4 combines the function of the sideband protocols
1155       into the main NFS protocol, the new security features apply to all  NFS
1156       version  4  operations  including  mounting,  file  locking, and so on.
1157       RPCGSS authentication can also be used with NFS versions 2 and  3,  but
1158       it does not protect their sideband protocols.
1159
1160       The  sec mount option specifies the security flavor used for operations
1161       on behalf of users on that NFS mount point.  Specifying  sec=krb5  pro‐
1162       vides  cryptographic  proof  of  a user's identity in each RPC request.
1163       This provides strong verification of the identity  of  users  accessing
1164       data  on the server.  Note that additional configuration besides adding
1165       this mount option is required in order  to  enable  Kerberos  security.
1166       Refer to the rpc.gssd(8) man page for details.
1167
1168       Two  additional  flavors  of Kerberos security are supported: krb5i and
1169       krb5p.  The krb5i security flavor provides a  cryptographically  strong
1170       guarantee that the data in each RPC request has not been tampered with.
1171       The krb5p security flavor encrypts every RPC request  to  prevent  data
1172       exposure  during  network  transit;  however,  expect  some performance
1173       impact when using integrity checking or  encryption.   Similar  support
1174       for other forms of cryptographic security is also available.
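
       For example, a sketch of an /etc/fstab entry that selects Kerberos
       authentication (the server name and path names are placeholders):

               server:/export   /mnt   nfs   sec=krb5   0 0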
1175
1176   NFS version 4 filesystem crossing
1177       The  NFS version 4 protocol allows a client to renegotiate the security
1178       flavor when the client crosses into a new  filesystem  on  the  server.
1179       The newly negotiated flavor affects only accesses of the new
1180       filesystem.
1181
1182       Such negotiation typically occurs when a client crosses from a server's
1183       pseudo-fs into one of the server's exported physical filesystems, which
1184       often have more restrictive security settings than the pseudo-fs.
1185
1186   NFS version 4 leases
1187       In NFS version 4, a lease is a period during which a server irrevocably
1188       grants  a  client  file  locks.  Once the lease expires, the server may
1189       revoke those locks.  Clients periodically renew their leases to prevent
1190       lock revocation.
1191
1192       After  an  NFS  version  4 server reboots, each client tells the server
1193       about existing file open and lock state under its lease  before  opera‐
1194       tion  can continue.  If a client reboots, the server frees all open and
1195       lock state associated with that client's lease.
1196
1197       When establishing a lease, therefore, a client must identify itself  to
1198       a  server.   Each  client  presents  an arbitrary string to distinguish
1199       itself from other clients.  The client administrator can supplement the
1200       default  identity string using the nfs4.nfs4_unique_id module parameter
1201       to avoid collisions with other client identity strings.
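
       For example, a sketch of a kernel boot parameter that sets this
       string, following the parameter name given above (the value shown is
       an arbitrary placeholder):

               nfs4.nfs4_unique_id=f2f22c78-example-client-id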
1202
1203       A client also uses a unique  security  flavor  and  principal  when  it
1204       establishes  its  lease.   If  two  clients  present  the same identity
1205       string, a server can use client principals to distinguish between them,
1206       thus  securely  preventing one client from interfering with the other's
1207       lease.
1208
1209       The Linux NFS client establishes  one  lease  on  each  NFS  version  4
1210       server.   Lease  management  operations, such as lease renewal, are not
1211       done on behalf of a particular file, lock, user, or mount point, but on
1212       behalf  of the client that owns that lease.  A client uses a consistent
1213       identity string, security flavor, and principal across  client  reboots
1214       to ensure that the server can promptly reap expired lease state.
1215
1216       When  Kerberos  is  configured  on a Linux NFS client (i.e., there is a
1217       /etc/krb5.keytab on that client), the client attempts to use a Kerberos
1218       security flavor for its lease management operations.  Kerberos provides
1219       secure authentication of each client.  By default, the client uses  the
1220       host/  or  nfs/ service principal in its /etc/krb5.keytab for this pur‐
1221       pose, as described in rpc.gssd(8).
1222
1223       If the client has Kerberos configured, but the server does not,  or  if
1224       the  client does not have a keytab or the requisite service principals,
1225       the client uses AUTH_SYS and UID 0 for lease management.
1226
1227   Using non-privileged source ports
1228       NFS clients usually communicate with NFS servers via  network  sockets.
1229       Each end of a socket is assigned a port value, which is simply a number
1230       between 1 and 65535 that distinguishes socket endpoints at the same  IP
1231       address.   A  socket  is  uniquely defined by a tuple that includes the
1232       transport protocol (TCP or UDP) and the port values and IP addresses of
1233       both endpoints.
1234
1235       The  NFS  client  can choose any source port value for its sockets, but
1236       usually chooses a privileged port.  A privileged port is a  port  value
1237       less  than  1024.   Only  a  process  with root privileges may create a
1238       socket with a privileged source port.
1239
1240       The exact range of privileged source ports that can be chosen is set by
1241       a pair of sysctls to avoid choosing a well-known port, such as the port
1242       used by ssh.  This means the number of source ports available  for  the
1243       NFS  client, and therefore the number of socket connections that can be
1244       used at the same time, is practically limited to only a few hundred.
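
       For example, assuming the pair in question is the sunrpc reserved-
       port sysctls, the current range can be inspected with:

               cat /proc/sys/sunrpc/min_resvport
               cat /proc/sys/sunrpc/max_resvport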
1245
1246       As described above, the traditional default NFS authentication  scheme,
1247       known as AUTH_SYS, relies on sending local UID and GID numbers to iden‐
1248       tify users making NFS requests.  An NFS server assumes that if  a  con‐
1249       nection  comes  from  a privileged port, the UID and GID numbers in the
1250       NFS requests on this connection have been verified by the client's ker‐
1251       nel  or  some  other local authority.  This is an easy system to spoof,
1252       but on a trusted physical network between trusted hosts, it is entirely
1253       adequate.
1254
1255       Roughly  speaking,  one  socket is used for each NFS mount point.  If a
1256       client could use non-privileged source ports as  well,  the  number  of
1257       sockets  allowed,  and  thus  the  maximum  number  of concurrent mount
1258       points, would be much larger.
1259
1260       Using non-privileged source ports may compromise server security  some‐
1261       what, since any user on AUTH_SYS mount points can now pretend to be any
1262       other when making NFS requests.  Thus NFS servers do not support
1263       this by default; it must be explicitly enabled via an export option.
1264
1265       To  retain  good security while allowing as many mount points as possi‐
1266       ble, it is best to allow non-privileged client connections only if  the
1267       server and client both require strong authentication, such as Kerberos.
1268
1269   Mounting through a firewall
1270       A  firewall  may reside between an NFS client and server, or the client
1271       or server may block some of its own ports via IP filter rules.   It  is
1272       still  possible  to mount an NFS server through a firewall, though some
1273       of the mount(8) command's automatic service endpoint  discovery  mecha‐
1274       nisms  may  not  work;  this  requires you to provide specific endpoint
1275       details via NFS mount options.
1276
1277       NFS servers normally run a portmapper or rpcbind  daemon  to  advertise
1278       their  service  endpoints to clients. Clients use the rpcbind daemon to
1279       determine:
1280
1281              What network port each RPC-based service is using
1282
1283              What transport protocols each RPC-based service supports
1284
1285       The rpcbind daemon uses a well-known port number (111) to help  clients
1286       find  a service endpoint.  Although NFS often uses a standard port num‐
1287       ber (2049), auxiliary services such as the NLM service can  choose  any
1288       unused port number at random.
1289
1290       Common  firewall  configurations block the well-known rpcbind port.  In
1291       the absence of an rpcbind service, the server administrator fixes the
1292       port  number  of  NFS-related  services  so that the firewall can allow
1293       access to specific NFS service ports.  Client administrators then spec‐
1294       ify  the  port number for the mountd service via the mount(8) command's
1295       mountport option.  It may also be necessary to enforce the use  of  TCP
1296       or UDP if the firewall blocks one of those transports.
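
       For example, a sketch of a mount command that supplies fixed endpoint
       details (the port numbers are placeholders chosen by the server
       administrator):

               mount -o mountport=4002,port=2049,proto=tcp server:/export /mnt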
1297
1298   NFS Access Control Lists
1299       Solaris allows NFS version 3 clients direct access to POSIX Access Con‐
1300       trol Lists stored in its local file systems.  This proprietary sideband
1301       protocol,  known  as  NFSACL,  provides richer access control than mode
1302       bits.  Linux  implements  this  protocol  for  compatibility  with  the
1303       Solaris  NFS  implementation.  The NFSACL protocol never became a stan‐
1304       dard part of the NFS version 3 specification, however.
1305
1306       The NFS version 4 specification mandates a new version of  Access  Con‐
1307       trol Lists that are semantically richer than POSIX ACLs.  NFS version 4
1308       ACLs are not fully compatible with POSIX ACLs; as such,  some  transla‐
1309       tion  between  the  two  is required in an environment that mixes POSIX
1310       ACLs and NFS version 4.
1311

THE REMOUNT OPTION

1313       Generic mount options such as rw and sync can be modified on NFS  mount
1314       points  using the remount option.  See mount(8) for more information on
1315       generic mount options.
1316
1317       With few exceptions, NFS-specific options cannot be modified during
1318       a remount.  For example, the underlying transport or NFS version
1319       cannot be changed by a remount.
1320
1321       Performing a remount on an NFS file system mounted with the noac option
1322       may  have unintended consequences.  The noac option is a combination of
1323       the generic option sync and the NFS-specific option actimeo=0.
1324
1325   Unmounting after a remount
1326       For mount points that use NFS versions 2 or 3, the NFS  umount  subcom‐
1327       mand  depends on knowing the original set of mount options used to per‐
1328       form the MNT operation.  These options are stored on disk  by  the  NFS
1329       mount subcommand, and can be erased by a remount.
1330
1331       To ensure that the saved mount options are not erased during a remount,
1332       specify either the local mount directory, or the  server  hostname  and
1333       export pathname, but not both, during a remount.  For example,
1334
1335               mount -o remount,ro /mnt
1336
1337       merges the mount option ro with the mount options already saved on disk
1338       for the NFS server mounted at /mnt.
1339

FILES

1341       /etc/fstab     file system table
1342
1343       /etc/nfsmount.conf
1344                      Configuration file for NFS mounts
1345

NOTES

1347       Before 2.4.7, the Linux NFS client did not support NFS over TCP.
1348
1349       Before 2.4.20, the Linux NFS  client  used  a  heuristic  to  determine
1350       whether cached file data was still valid rather than using the standard
1351       close-to-open cache coherency method described above.
1352
1353       Starting with 2.4.22, the Linux NFS client employs a Van Jacobson-
1354       based RTT estimator to determine retransmit timeout values when using
1355       NFS over UDP.
1356
1357       Before 2.6.0, the Linux NFS client did not support NFS version 4.
1358
1359       Before 2.6.8, the Linux NFS client  used  only  synchronous  reads  and
1360       writes when the rsize and wsize settings were smaller than the system's
1361       page size.
1362
1363       The Linux client's support for protocol versions depends on whether
1364       the kernel was built with options CONFIG_NFS_V2, CONFIG_NFS_V3,
1365       CONFIG_NFS_V4, CONFIG_NFS_V4_1, and CONFIG_NFS_V4_2.
1366

SEE ALSO

1368       fstab(5), mount(8), umount(8), mount.nfs(8), umount.nfs(8), exports(5),
1369       nfsmount.conf(5),   netconfig(5),   ipv6(7),   nfsd(8),   sm-notify(8),
1370       rpc.statd(8), rpc.idmapd(8), rpc.gssd(8), rpc.svcgssd(8), kerberos(1)
1371
1372       RFC 768 for the UDP specification.
1373       RFC 793 for the TCP specification.
1374       RFC 1094 for the NFS version 2 specification.
1375       RFC 1813 for the NFS version 3 specification.
1376       RFC 1832 for the XDR specification.
1377       RFC 1833 for the RPC bind specification.
1378       RFC 2203 for the RPCSEC GSS API protocol specification.
1379       RFC 7530 for the NFS version 4.0 specification.
1380       RFC 5661 for the NFS version 4.1 specification.
1381       RFC 7862 for the NFS version 4.2 specification.
1382
1383
1384
1385                                9 October 2012                          NFS(5)