NFS(5)                        File Formats Manual                       NFS(5)
2
3
4

NAME

6       nfs - fstab format and options for the nfs file systems
7

SYNOPSIS

9       /etc/fstab
10

DESCRIPTION

12       NFS  is  an  Internet  Standard protocol created by Sun Microsystems in
13       1984. NFS was developed to allow file sharing between systems  residing
14       on  a local area network.  Depending on kernel configuration, the Linux
15       NFS client may support NFS versions 2, 3, 4.0, 4.1, or 4.2.
16
17       The mount(8) command attaches a file system to the system's name  space
18       hierarchy  at  a  given mount point.  The /etc/fstab file describes how
19       mount(8) should assemble a system's file name  hierarchy  from  various
20       independent  file  systems  (including  file  systems  exported  by NFS
21       servers).  Each line in the /etc/fstab file  describes  a  single  file
22       system,  its  mount  point, and a set of default mount options for that
23       mount point.
24
25       For NFS file system mounts, a line in the /etc/fstab file specifies the
26       server  name,  the path name of the exported server directory to mount,
27       the local directory that is the mount point, the type  of  file  system
28       that is being mounted, and a list of mount options that control the way
29       the filesystem is mounted and how the NFS client behaves when accessing
30       files on this mount point.  The fifth and sixth fields on each line are
31       not used by NFS, thus conventionally each contains the digit zero. For
32       example:
33
34               server:path   /mountpoint   fstype   option,option,...   0 0
35
36       The  server's  hostname  and  export pathname are separated by a colon,
37       while the mount options are separated by commas. The  remaining  fields
38       are separated by blanks or tabs.
39
40       The server's hostname can be an unqualified hostname, a fully qualified
41       domain name, a dotted quad IPv4 address, or an IPv6 address enclosed in
42       square  brackets.   Link-local  and  site-local  IPv6 addresses must be
43       accompanied by an interface identifier.  See  ipv6(7)  for  details  on
44       specifying raw IPv6 addresses.
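
       For example, each of the following is a syntactically valid server
       specification (the names and addresses are illustrative only, with
       the IPv4 and IPv6 examples drawn from the documentation ranges):

               nfsserver:/export
               nfsserver.example.com:/export
               192.0.2.1:/export
               [2001:db8::1]:/export
               [fe80::1%eth0]:/export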
45
46       The  fstype  field  contains  "nfs".   Use  of  the  "nfs4"  fstype  in
47       /etc/fstab is deprecated.
48

MOUNT OPTIONS

50       Refer to mount(8) for a description of generic mount options  available
51       for  all file systems. If you do not need to specify any mount options,
52       use the generic option defaults in /etc/fstab.
53
54   Options supported by all versions
55       These options are valid to use with any NFS version.
56
57       nfsvers=n      The NFS protocol version  number  used  to  contact  the
58                      server's  NFS  service.   If the server does not support
59                      the requested version, the mount request fails.  If this
60                      option  is  not  specified, the client tries version 4.1
61                      first, then negotiates down until  it  finds  a  version
62                      supported by the server.
63
64       vers=n         This option is an alternative to the nfsvers option.  It
65                      is included for compatibility with other operating  sys‐
66                      tems.
67
68       soft / hard    Determines the recovery behavior of the NFS client after
69                      an NFS request times out.  If neither option  is  speci‐
70                      fied  (or if the hard option is specified), NFS requests
71                      are retried indefinitely.  If the soft option is  speci‐
72                      fied,  then  the  NFS  client fails an NFS request after
73                      retrans retransmissions have been sent, causing the  NFS
74                      client to return an error to the calling application.
75
76                      NB:  A  so-called  "soft"  timeout can cause silent data
77                      corruption in certain  cases.  As  such,  use  the  soft
78                      option only when client responsiveness is more important
79                      than data integrity.  Using NFS over TCP  or  increasing
80                      the value of the retrans option may mitigate some of the
81                      risks of using the soft option.
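
                      For example, a hypothetical /etc/fstab entry that
                      accepts the risks of soft in exchange for client
                      responsiveness, while using TCP and extra retries
                      to reduce those risks:

                              server:/export  /mnt  nfs  soft,proto=tcp,retrans=6  0 0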
82
83       intr / nointr  This option is provided for backward compatibility.   It
84                      is ignored after kernel 2.6.25.
85
86       timeo=n        The  time  in  deciseconds  (tenths of a second) the NFS
87                      client waits for a response before  it  retries  an  NFS
88                      request.
89
90                      For NFS over TCP the default timeo value is 600 (60 sec‐
91                      onds).  The NFS client performs  linear  backoff:  After
92                      each retransmission the timeout is increased by timeo up
93                      to the maximum of 600 seconds.
94
95                      However, for NFS over UDP, the client uses  an  adaptive
96                      algorithm  to  estimate an appropriate timeout value for
97                      frequently used request types (such as  READ  and  WRITE
98                      requests),  but  uses the timeo setting for infrequently
99                      used request types (such as FSINFO  requests).   If  the
100                      timeo option is not specified, infrequently used request
101                      types  are  retried  after  1.1  seconds.   After   each
102                      retransmission,  the  NFS client doubles the timeout for
103                      that request, up to a maximum timeout length of 60  sec‐
104                      onds.
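
                      Since timeo is expressed in deciseconds, a
                      hypothetical entry that waits 15 seconds before the
                      first retransmission would be:

                              server:/export  /mnt  nfs  timeo=150  0 0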
105
106       retrans=n      The  number  of  times  the NFS client retries a request
107                      before it  attempts  further  recovery  action.  If  the
108                      retrans  option  is  not specified, the NFS client tries
109                      each request three times.
110
111                      The NFS client generates a "server not responding"  mes‐
112                      sage after retrans retries, then attempts further recov‐
113                      ery (depending on whether the hard mount  option  is  in
114                      effect).
115
116       rsize=n        The maximum number of bytes in each network READ request
117                      that the NFS client can receive when reading data from a
118                      file  on an NFS server.  The actual data payload size of
119                      each NFS READ request is equal to or  smaller  than  the
120                      rsize setting. The largest read payload supported by the
121                      Linux NFS client is 1,048,576 bytes (one megabyte).
122
123                      The rsize value is a positive integral multiple of 1024.
124                      Specified rsize values lower than 1024 are replaced with
125                      4096; values  larger  than  1048576  are  replaced  with
126                      1048576.  If  a  specified value is within the supported
127                      range but not a multiple of 1024, it is rounded down  to
128                      the nearest multiple of 1024.
129
130                      If  an rsize value is not specified, or if the specified
131                      rsize value is  larger  than  the  maximum  that  either
132                      client  or  server  can  support,  the client and server
133                      negotiate the largest rsize value  that  they  can  both
134                      support.
135
136                      The rsize mount option as specified on the mount(8) com‐
137                      mand line appears in the /etc/mtab  file.  However,  the
138                      effective  rsize  value  negotiated  by  the  client and
139                      server is reported in the /proc/mounts file.
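
                      For example, the effective value can be inspected
                      after mounting with a command such as (the mount
                      point is hypothetical):

                              $ grep /mnt /proc/mounts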
140
141       wsize=n        The maximum number of bytes per  network  WRITE  request
142                      that the NFS client can send when writing data to a file
143                      on an NFS server. The actual data payload size  of  each
144                      NFS  WRITE request is equal to or smaller than the wsize
145                      setting. The largest  write  payload  supported  by  the
146                      Linux NFS client is 1,048,576 bytes (one megabyte).
147
148                      Similar to rsize, the wsize value is a positive inte‐
149                      gral multiple of 1024.   Specified  wsize  values  lower
150                      than  1024  are  replaced  with 4096; values larger than
151                      1048576 are replaced with 1048576. If a specified  value
152                      is  within  the  supported  range  but not a multiple of
153                      1024, it is rounded down  to  the  nearest  multiple  of
154                      1024.
155
156                      If  a  wsize value is not specified, or if the specified
157                      wsize value is  larger  than  the  maximum  that  either
158                      client  or  server  can  support,  the client and server
159                      negotiate the largest wsize value  that  they  can  both
160                      support.
161
162                      The wsize mount option as specified on the mount(8) com‐
163                      mand line appears in the /etc/mtab  file.  However,  the
164                      effective  wsize  value  negotiated  by  the  client and
165                      server is reported in the /proc/mounts file.
166
167       ac / noac      Selects whether the client may cache file attributes. If
168                      neither option is specified (or if ac is specified), the
169                      client caches file attributes.
170
171                      To  improve  performance,   NFS   clients   cache   file
172                      attributes.  Every few seconds, an NFS client checks the
173                      server's version of each file's attributes for  updates.
174                      Changes  that  occur on the server in those small inter‐
175                      vals remain  undetected  until  the  client  checks  the
176                      server  again.  The  noac  option  prevents clients from
177                      caching file attributes so that  applications  can  more
178                      quickly detect file changes on the server.
179
180                      In  addition  to preventing the client from caching file
181                      attributes, the noac option forces application writes to
182                      become  synchronous  so  that  local  changes  to a file
183                      become visible on the  server  immediately.   That  way,
184                      other clients can quickly detect recent writes when they
185                      check the file's attributes.
186
187                      Using the noac option provides greater  cache  coherence
188                      among  NFS  clients  accessing  the  same  files, but it
189                      exacts a significant performance  penalty.   As  such,
190                      judicious  use  of  file  locking is encouraged instead.
191                      The DATA  AND  METADATA  COHERENCE  section  contains  a
192                      detailed discussion of these trade-offs.
193
194       acregmin=n     The minimum time (in seconds) that the NFS client caches
195                      attributes of a regular file before  it  requests  fresh
196                      attribute  information from a server.  If this option is
197                      not specified, the NFS client uses a  3-second  minimum.
198                      See  the  DATA AND METADATA COHERENCE section for a full
199                      discussion of attribute caching.
200
201       acregmax=n     The maximum time (in seconds) that the NFS client caches
202                      attributes  of  a  regular file before it requests fresh
203                      attribute information from a server.  If this option  is
204                      not  specified, the NFS client uses a 60-second maximum.
205                      See the DATA AND METADATA COHERENCE section for  a  full
206                      discussion of attribute caching.
207
208       acdirmin=n     The minimum time (in seconds) that the NFS client caches
209                      attributes of  a  directory  before  it  requests  fresh
210                      attribute  information from a server.  If this option is
211                      not specified, the NFS client uses a 30-second  minimum.
212                      See  the  DATA AND METADATA COHERENCE section for a full
213                      discussion of attribute caching.
214
215       acdirmax=n     The maximum time (in seconds) that the NFS client caches
216                      attributes  of  a  directory  before  it  requests fresh
217                      attribute information from a server.  If this option  is
218                      not  specified, the NFS client uses a 60-second maximum.
219                      See the DATA AND METADATA COHERENCE section for  a  full
220                      discussion of attribute caching.
221
222       actimeo=n      Using  actimeo sets all of acregmin, acregmax, acdirmin,
223                      and acdirmax to the same value.  If this option  is  not
224                      specified,  the NFS client uses the defaults for each of
225                      these options listed above.
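
                      For example, a hypothetical entry that relaxes all
                      four attribute cache timeouts to ten minutes for a
                      rarely changing export:

                              server:/export  /mnt  nfs  actimeo=600  0 0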
226
227       bg / fg        Determines  how  the  mount(8)  command  behaves  if  an
228                      attempt  to mount an export fails.  The fg option causes
229                      mount(8) to exit with an error status if any part of the
230                      mount  request  times  out  or  fails outright.  This is
231                      called a "foreground" mount, and is the default behavior
232                      if neither the fg nor bg mount option is specified.
233
234                      If  the  bg  option  is  specified, a timeout or failure
235                      causes the mount(8) command to fork a child  which  con‐
236                      tinues to attempt to mount the export.  The parent imme‐
237                      diately returns with a zero exit code.  This is known as
238                      a "background" mount.
239
240                      If  the  local  mount  point  directory  is missing, the
241                      mount(8) command acts as if the mount request timed out.
242                      This  permits  nested NFS mounts specified in /etc/fstab
243                      to proceed in any order  during  system  initialization,
244                      even  if some NFS servers are not yet available.  Alter‐
245                      natively these issues can be addressed  using  an  auto‐
246                      mounter (refer to automount(8) for details).
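
                      For example, a hypothetical entry that backgrounds
                      the mount and gives up after 30 minutes, so that an
                      unreachable server does not block system startup:

                              server:/export  /mnt  nfs  bg,retry=30  0 0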
247
248       rdirplus / nordirplus
249                      Selects   whether  to  use  NFS  v3  or  v4  READDIRPLUS
250                      requests.  If this option  is  not  specified,  the  NFS
251                      client  uses READDIRPLUS requests on NFS v3 or v4 mounts
252                      to read small directories.   Some  applications  perform
253                      better  if the client uses only READDIR requests for all
254                      directories.
255
256       retry=n        The number of minutes that the mount(8) command  retries
257                      an  NFS  mount operation in the foreground or background
258                      before giving up.  If this option is not specified,  the
259                      default  value  for  foreground mounts is 2 minutes, and
260                      the default value for background mounts is 10000 minutes
261                      (80  minutes  shy  of  one week).  If a value of zero is
262                      specified, the mount(8) command exits immediately  after
263                      the first failure.
264
265       sec=flavors    A  colon-separated  list of one or more security flavors
266                      to use for accessing files on the mounted export. If the
267                      server  does not support any of these flavors, the mount
268                      operation fails.  If sec= is not specified,  the  client
269                      attempts  to find a security flavor that both the client
270                      and the server support.  Valid flavors are  none,  sys,
271                      krb5, krb5i, and krb5p.  Refer to the SECURITY CONSIDER‐
272                      ATIONS section for details.
273
274       sharecache / nosharecache
275                      Determines how the client's  data  cache  and  attribute
276                      cache are shared when mounting the same export more than
277                      once concurrently.  Using the same cache reduces  memory
278                      requirements  on  the client and presents identical file
279                      contents to applications when the same  remote  file  is
280                      accessed via different mount points.
281
282                      If  neither  option  is  specified, or if the sharecache
283                      option is specified, then a single cache is used for all
284                      mount  points  that  access  the  same  export.   If the
285                      nosharecache option is specified, then that mount  point
286                      gets  a unique cache.  Note that when data and attribute
287                      caches are shared, the  mount  options  from  the  first
288                      mount point take effect for subsequent concurrent mounts
289                      of the same export.
290
291                      As of kernel 2.6.18, the behavior specified by  noshare‐
292                      cache  is  legacy caching behavior. This is considered a
293                      data risk since multiple cached copies of the same  file
294                      on  the  same  client can become out of sync following a
295                      local update of one of the copies.
296
297       resvport / noresvport
298                      Specifies whether the NFS client should use a privileged
299                      source  port  when  communicating with an NFS server for
300                      this mount point.  If this option is not  specified,  or
301                      the  resvport option is specified, the NFS client uses a
302                      privileged source port.  If  the  noresvport  option  is
303                      specified,  the  NFS client uses a non-privileged source
304                      port.  This option is supported in  kernels  2.6.28  and
305                      later.
306
307                      Using  non-privileged  source  ports  helps increase the
308                      maximum number of NFS mount points allowed on a  client,
309                      but  NFS  servers must be configured to allow clients to
310                      connect via non-privileged source ports.
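
                      On a Linux NFS server this is typically done with
                      the insecure export option; a sketch of an
                      /etc/exports line (host and path are hypothetical):

                              /export  client.example.com(rw,insecure)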
311
312                      Refer to the SECURITY CONSIDERATIONS section for  impor‐
313                      tant details.
314
315       lookupcache=mode
316                      Specifies  how the kernel manages its cache of directory
317                      entries for a given mount point.  mode  can  be  one  of
318                      all,  none,  pos, or positive.  This option is supported
319                      in kernels 2.6.28 and later.
320
321                      The Linux NFS client caches the result of all NFS LOOKUP
322                      requests.   If  the  requested directory entry exists on
323                      the server, the result is referred to as  positive.   If
324                      the  requested  directory  entry  does  not exist on the
325                      server, the result is referred to as negative.
326
327                      If this option is not specified, or if all is specified,
328                      the client assumes both types of directory cache entries
329                      are  valid  until  their   parent   directory's   cached
330                      attributes expire.
331
332                      If pos or positive is specified, the client assumes pos‐
333                      itive entries are valid until their  parent  directory's
334                      cached  attributes  expire, but always revalidates nega‐
335                      tive entries before an application can use them.
336
337                      If none is specified, the client revalidates both  types
338                      of directory cache entries before an application can use
339                      them.  This permits quick detection of files  that  were
340                      created  or  removed  by  other  clients, but can impact
341                      application and server performance.
342
343                      The DATA  AND  METADATA  COHERENCE  section  contains  a
344                      detailed discussion of these trade-offs.
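
                      For example, a hypothetical entry that caches only
                      positive lookup results, mimicking the behavior of
                      kernels before 2.6.28:

                              server:/export  /mnt  nfs  lookupcache=positive  0 0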
345
346       fsc / nofsc    Enables or disables caching of (read-only)  data  pages
347                      on  the  local  disk  using the FS-Cache facility.  See
348                      cachefilesd(8)       and      <kernel_source>/Documenta‐
349                      tion/filesystems/caching for details on how to  configure
350                      the FS-Cache facility.  The default value is nofsc.
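
                      For example, a hypothetical entry that enables local
                      caching (the cachefilesd daemon must also be
                      running):

                              server:/export  /mnt  nfs  fsc  0 0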
351
352   Options for NFS versions 2 and 3 only
353       Use  these options, along with the options in the above subsection, for
354       NFS versions 2 and 3 only.
355
356       proto=netid    The netid determines the transport that is used to  com‐
357                      municate  with  the  NFS  server.  Available options are
358                      udp, udp6, tcp, tcp6, and rdma.  Those which  end  in  6
359                      use IPv6 addresses and are only available if support for
360                      TI-RPC is built in. Others use IPv4 addresses.
361
362                      Each transport protocol uses different  default  retrans
363                      and  timeo  settings.  Refer to the description of these
364                      two mount options for details.
365
366                      In addition to controlling how the NFS client  transmits
367                      requests  to the server, this mount option also controls
368                      how the mount(8) command communicates with the  server's
369                      rpcbind  and  mountd  services.  Specifying a netid that
370                      uses TCP forces all traffic from  the  mount(8)  command
371                      and  the NFS client to use TCP.  Specifying a netid that
372                      uses UDP forces all traffic types to use UDP.
373
374                      Before using NFS over UDP, refer to the TRANSPORT  METH‐
375                      ODS section.
376
377                      If the proto mount option is not specified, the mount(8)
378                      command discovers which protocols  the  server  supports
379                      and  chooses  an appropriate transport for each service.
380                      Refer to the TRANSPORT METHODS section for more details.
381
382       udp            The  udp  option  is  an   alternative   to   specifying
383                      proto=udp.   It is included for compatibility with other
384                      operating systems.
385
386                      Before using NFS over UDP, refer to the TRANSPORT  METH‐
387                      ODS section.
388
389       tcp            The   tcp   option   is  an  alternative  to  specifying
390                      proto=tcp.  It is included for compatibility with  other
391                      operating systems.
392
393       rdma           The   rdma   option  is  an  alternative  to  specifying
394                      proto=rdma.
395
396       port=n         The numeric value of the server's NFS service port.   If
397                      the  server's NFS service is not available on the speci‐
398                      fied port, the mount request fails.
399
400                      If this option is not specified,  or  if  the  specified
401                      port  value  is 0, then the NFS client uses the NFS ser‐
402                      vice port number advertised by the server's rpcbind ser‐
403                      vice.   The  mount request fails if the server's rpcbind
404                      service is not available, the server's  NFS  service  is
405                      not registered with its rpcbind service, or the server's
406                      NFS service is not available on the advertised port.
407
408       mountport=n    The numeric value of the server's mountd port.   If  the
409                      server's  mountd  service is not available on the speci‐
410                      fied port, the mount request fails.
411
412                      If this option is not specified,  or  if  the  specified
413                      port  value  is  0,  then  the mount(8) command uses the
414                      mountd service port number advertised  by  the  server's
415                      rpcbind   service.   The  mount  request  fails  if  the
416                      server's rpcbind service is not available, the  server's
417                      mountd  service  is not registered with its rpcbind ser‐
418                      vice, or the server's mountd service is not available on
419                      the advertised port.
420
421                      This  option  can  be  used  when mounting an NFS server
422                      through a firewall that blocks the rpcbind protocol.
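
                      For example, a sketch of an entry for a firewalled
                      server whose mountd has been pinned to a fixed port
                      (the port numbers are hypothetical):

                              server:/export  /mnt  nfs  mountport=4002,port=2049  0 0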
423
424       mountproto=netid
425                      The transport the NFS client uses to  transmit  requests
426                      to  the NFS server's mountd service when performing this
427                      mount request, and  when  later  unmounting  this  mount
428                      point.
429
430                      netid may be either udp or tcp, which use IPv4
431                      addresses, or, if TI-RPC is built into the mount.nfs
432                      command, udp6 or tcp6, which use IPv6 addresses.
433
434                      This  option  can  be  used  when mounting an NFS server
435                      through a firewall that blocks a  particular  transport.
436                      When  used in combination with the proto option, differ‐
437                      ent transports for mountd requests and NFS requests  can
438                      be  specified.   If  the  server's mountd service is not
439                      available via the specified transport, the mount request
440                      fails.
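
                      For example, a hypothetical entry that sends mountd
                      requests over UDP while the NFS traffic itself uses
                      TCP:

                              server:/export  /mnt  nfs  proto=tcp,mountproto=udp  0 0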
441
442                      Refer  to  the TRANSPORT METHODS section for more on how
443                      the mountproto mount option  interacts  with  the  proto
444                      mount option.
445
446       mounthost=name The hostname of the host running mountd.  If this option
447                      is not specified, the mount(8) command assumes that  the
448                      mountd service runs on the same host as the NFS service.
449
450       mountvers=n    The  RPC  version  number  used  to contact the server's
451                      mountd.  If this option is  not  specified,  the  client
452                      uses  a  version number appropriate to the requested NFS
453                      version.  This option is useful when multiple  NFS  ser‐
454                      vices are running on the same remote server host.
455
456       namlen=n       The  maximum  length  of  a  pathname  component on this
457                      mount.  If this option is  not  specified,  the  maximum
458                      length  is  negotiated  with  the server. In most cases,
459                      this maximum length is 255 characters.
460
461                      Some early versions of NFS did not support this negotia‐
462                      tion.    Using  this  option  ensures  that  pathconf(3)
463                      reports the proper maximum component length to  applica‐
464                      tions in such cases.
465
466       lock / nolock  Selects whether to use the NLM sideband protocol to lock
467                      files on the server.  If neither option is specified (or
468                      if  lock  is  specified),  NLM  locking is used for this
469                      mount point.  When using the nolock option, applications
470                      can  lock  files,  but such locks provide exclusion only
471                      against other applications running on the  same  client.
472                      Remote applications are not affected by these locks.
473
474                      NLM locking must be disabled with the nolock option when
475                      using NFS to mount /var because /var contains files used
476                      by  the  NLM  implementation on Linux.  Using the nolock
477                      option is also required when  mounting  exports  on  NFS
478                      servers that do not support the NLM protocol.
479
480       cto / nocto    Selects  whether  to  use  close-to-open cache coherence
481                      semantics.  If neither option is specified (or if cto is
482                      specified),  the  client uses close-to-open cache coher‐
483                      ence semantics. If the nocto option  is  specified,  the
484                      client  uses  a non-standard heuristic to determine when
485                      files on the server have changed.
486
487                      Using the nocto option may improve performance for read-
488                      only  mounts, but should be used only if the data on the
489                      server changes only occasionally.  The DATA AND METADATA
490                      COHERENCE  section discusses the behavior of this option
491                      in more detail.
492
493       acl / noacl    Selects whether to use the NFSACL sideband  protocol  on
494                      this  mount  point.   The  NFSACL sideband protocol is a
495                      proprietary protocol implemented in Solaris that manages
496                      Access  Control  Lists. NFSACL was never made a standard
497                      part of the NFS protocol specification.
498
499                      If neither acl nor noacl option is  specified,  the  NFS
500                      client  negotiates  with the server to see if the NFSACL
501                      protocol is supported, and uses it if  the  server  sup‐
502                      ports it.  Disabling the NFSACL sideband protocol may be
503                      necessary if the  negotiation  causes  problems  on  the
504                      client  or server.  Refer to the SECURITY CONSIDERATIONS
505                      section for more details.
506
507       local_lock=mechanism
508                      Specifies whether to use local locking for any  or  both
509                      of  the  flock and the POSIX locking mechanisms.  mecha‐
510                      nism can be one of all, flock,  posix,  or  none.   This
511                      option is supported in kernels 2.6.37 and later.
512
513                      The Linux NFS client provides a way to make locks local.
514                      This means that applications can lock  files,  but  such
515                      locks  provide exclusion only against other applications
516                      running on the same client. Remote applications are  not
517                      affected by these locks.
518
519                      If  this  option  is not specified, or if none is speci‐
520                      fied, the client assumes that the locks are not local.
521
522                      If all is specified, the client assumes that both  flock
523                      and POSIX locks are local.
524
525                      If  flock  is  specified,  the  client assumes that only
526                      flock locks are local and uses the NLM sideband protocol
527                      lock files when POSIX locks are used.
528
529                      If  posix  is  specified,  the client assumes that POSIX
530                      locks are local and uses the NLM sideband protocol to lock
531                      files when flock locks are used.
532
533                      To  support legacy flock behavior similar to that of NFS
534                      clients < 2.6.12, use 'local_lock=flock'. This option is
535                      required  when  exporting  NFS mounts via Samba as Samba
536                      maps Windows share mode locks as flock.  Since NFS clients
537                      > 2.6.12 implement flock by emulating POSIX locks, relying
538                      on NLM in that configuration would result in conflicting locks.
539
540                      NOTE: When used together, the 'local_lock' mount  option
541                      will be overridden by the 'nolock'/'lock' mount options.
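
                      For example, a hypothetical entry for a mount that
                      will be re-exported via Samba:

                              server:/export  /srv/share  nfs  local_lock=flock  0 0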
542
543   Options for NFS version 4 only
544       Use  these  options,  along  with  the  options in the first subsection
545       above, for NFS version 4.0 and newer.
546
547       proto=netid    The netid determines the transport that is used to  com‐
548                      municate  with  the  NFS  server.  Supported options are
549                      tcp, tcp6, and rdma.  tcp6 uses IPv6 addresses  and  is
550                      only  available  if support for TI-RPC is built in. The
551                      other two use IPv4 addresses.
552
553                      All NFS version 4 servers are required to  support  TCP,
554                      so  if  this mount option is not specified, the NFS ver‐
555                      sion 4 client uses  the  TCP  protocol.   Refer  to  the
556                      TRANSPORT METHODS section for more details.
557
558       minorversion=n Specifies  the  protocol  minor  version  number.  NFSv4
559                      introduces  "minor  versioning,"  where   NFS   protocol
560                      enhancements  can  be introduced without bumping the NFS
561                      protocol version  number.   Before  kernel  2.6.38,  the
562                      minor  version  is  always  zero, and this option is not
563                      recognized.  In later  kernels,  specifying  "minorver‐
564                      sion=1"  enables  a number of advanced features, such as
565                      NFSv4 sessions.
566
567                      Recent kernels allow the minor version to  be  specified
568                      using   the   vers=  option.   For  example,  specifying
569                      vers=4.1 is  the  same  as  specifying  vers=4,minorver‐
570                      sion=1.
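
                      For example, the following two hypothetical commands
                      request the same protocol version:

                              # mount -t nfs -o vers=4.1 server:/export /mnt
                              # mount -t nfs -o vers=4,minorversion=1 server:/export /mnt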
571
572       port=n         The  numeric value of the server's NFS service port.  If
573                      the server's NFS service is not available on the  speci‐
574                      fied port, the mount request fails.
575
576                      If  this  mount  option is not specified, the NFS client
577                      uses the standard NFS port number of 2049 without  first
578                      checking  the  server's rpcbind service.  This allows an
579                      NFS version 4 client to contact an NFS version 4  server
580                      through a firewall that may block rpcbind requests.
581
582                      If  the  specified  port value is 0, then the NFS client
583                      uses the NFS  service  port  number  advertised  by  the
584                      server's  rpcbind  service.   The mount request fails if
585                      the server's  rpcbind  service  is  not  available,  the
586                      server's  NFS service is not registered with its rpcbind
587                      service, or the server's NFS service is not available on
588                      the advertised port.
589
590       cto / nocto    Selects  whether  to  use  close-to-open cache coherence
591                      semantics for NFS directories on this mount  point.   If
592                      neither  cto  nor  nocto is specified, the default is to
593                      use close-to-open cache coherence semantics for directo‐
594                      ries.
595
596                      File  data  caching  behavior  is  not  affected by this
597                      option.  The DATA AND METADATA  COHERENCE  section  dis‐
598                      cusses the behavior of this option in more detail.
599
600       clientaddr=n.n.n.n
601
602       clientaddr=n:n:...:n
603                      Specifies  a  single IPv4 address (in dotted-quad form),
604                      or a non-link-local IPv6 address, that  the  NFS  client
605                      advertises  to  allow servers to perform NFS version 4.0
606                      callback requests against files on this mount point.  If
607                      the   server is unable to establish callback connections
608                      to clients, performance  may  degrade,  or  accesses  to
609                      files may temporarily hang.  A value of  IPv4_ANY
610                      (0.0.0.0), or the equivalent IPv6 any address, signals
611                      to the NFS server that this NFS client does not want
612                      delegations.
613
614                      If this option is not specified,  the  mount(8)  command
615                      attempts  to  discover  an  appropriate callback address
616                      automatically.  The automatic discovery process  is  not
617                      perfect,  however.   In  the presence of multiple client
618                      network interfaces, special routing policies, or  atypi‐
619                      cal  network  topologies,  the  exact address to use for
620                      callbacks may be nontrivial to determine.
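
                      For example, a sketch of an NFS version 4.0 entry
                      that advertises an explicit callback address (the
                      address, drawn from the IPv4 documentation range, is
                      illustrative only):

                              server:/export  /mnt  nfs  vers=4.0,clientaddr=192.0.2.10  0 0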
621
622                      NFS protocol versions 4.1 and 4.2 use the  client-estab‐
623                      lished  TCP  connection for callback requests, so do not
624                      require the server to  connect  to  the  client.   This
625                      option therefore affects only NFS version 4.0 mounts.
626
627       migration / nomigration
628                      Selects whether the client uses an identification string
629                      that is compatible with NFSv4 Transparent  State  Migra‐
630                      tion (TSM).  If the mounted server supports NFSv4 migra‐
631                      tion with TSM, specify the migration option.
632
633                      Some server features misbehave in the face of  a  migra‐
634                      tion-compatible  identification string.  The nomigration
635                      option retains the use of a traditional client  identi‐
636                      fication  string  which  is  compatible  with legacy NFS
637                      servers.  This is also the behavior if neither option is
638                      specified.   A  client's  open  and lock state cannot be
639                      migrated transparently when it identifies itself  via  a
640                      traditional identification string.
641
642                      This  mount  option  has no effect with NFSv4 minor ver‐
643                      sions newer than zero, which always  use  TSM-compatible
644                      client identification strings.
645

nfs4 FILE SYSTEM TYPE

647       The  nfs4 file system type is an old syntax for specifying NFSv4 usage.
648       It can still be  used  with  all  NFSv4-specific  and  common  options,
649       except for the nfsvers mount option.
650

MOUNT CONFIGURATION FILE

652       If  the  mount command is configured to do so, all of the mount options
653       described in the  previous  section  can  also  be  configured  in  the
654       /etc/nfsmount.conf file. See nfsmount.conf(5) for details.
655

EXAMPLES

657       To  mount  an  export using NFS version 2, use the nfs file system type
658       and specify the nfsvers=2 mount option.  To mount using NFS version  3,
659       use  the  nfs  file system type and specify the nfsvers=3 mount option.
660       To mount using NFS version 4, use either the nfs file system type, with
661       the nfsvers=4 mount option, or the nfs4 file system type.
662
663       The  following example from an /etc/fstab file causes the mount command
664       to negotiate reasonable defaults for NFS behavior.
665
666               server:/export  /mnt  nfs   defaults                      0 0
667
668       Here is an example from an /etc/fstab file for an NFS version  2  mount
669       over UDP.
670
671               server:/export  /mnt  nfs   nfsvers=2,proto=udp           0 0
672
673       This  example shows how to mount using NFS version 4 over TCP with Ker‐
674       beros 5 mutual authentication.
675
676               server:/export  /mnt  nfs4  sec=krb5                      0 0
677
678       This example shows how to mount using NFS version 4 over TCP with  Ker‐
679       beros 5 privacy or data integrity mode.
680
681               server:/export  /mnt  nfs4  sec=krb5p:krb5i               0 0
682
683       This example can be used to mount /usr over NFS.
684
685               server:/export  /usr  nfs   ro,nolock,nocto,actimeo=3600  0 0
686
687       This  example  shows  how to mount an NFS server using a raw IPv6 link-
688       local address.
689
690               [fe80::215:c5ff:fb3e:e2b1%eth0]:/export /mnt nfs defaults 0 0
691

TRANSPORT METHODS

693       NFS clients send requests to NFS servers via Remote Procedure Calls, or
694       RPCs.  The RPC client discovers remote service endpoints automatically,
695       handles per-request authentication, adjusts request parameters for dif‐
696       ferent  byte  endianness on client and server, and retransmits requests
697       that may have been lost by the network or  server.   RPC  requests  and
698       replies flow over a network transport.
699
700       In  most  cases,  the  mount(8) command, NFS client, and NFS server can
701       automatically negotiate proper transport and data  transfer  size  set‐
702       tings  for  a  mount point.  In some cases, however, it pays to specify
703       these settings explicitly using mount options.
704
705       Traditionally, NFS clients  used  the  UDP  transport  exclusively  for
706       transmitting requests to servers.  Though its implementation is simple,
707       NFS over UDP has many limitations that  prevent  smooth  operation  and
708       good  performance  in  some  common  deployment  environments.  Even an
709       insignificant packet loss  rate  results  in  the  loss  of  whole  NFS
710       requests;  as  such,  retransmit  timeouts are usually in the subsecond
711       range to allow clients to recover quickly from  dropped  requests,  but
712       this can result in extraneous network traffic and server load.
713
714       However,  UDP  can be quite effective in specialized settings where the
715       network's MTU is large relative to NFS's data transfer size (such as net‐
716       work environments that enable jumbo Ethernet frames).  In such environ‐
717       ments, trimming the rsize and wsize settings so that each NFS  read  or
718       write  request  fits in just a few network frames (or even in  a single
719       frame) is advised.  This reduces the probability that  the  loss  of  a
720       single  MTU-sized  network frame results in the loss of an entire large
721       read or write request.
722
723       TCP is the default transport protocol used for all modern NFS implemen‐
724       tations.  It performs well in almost every conceivable network environ‐
725       ment and provides excellent guarantees against data  corruption  caused
726       by  network  unreliability.   TCP is often a requirement for mounting a
727       server through a network firewall.
728
729       Under normal circumstances, networks drop packets much more  frequently
730       than  NFS  servers  drop  requests.   As such, an aggressive retransmit
731       timeout  setting for NFS over TCP is unnecessary. Typical timeout  set‐
732       tings  for  NFS  over  TCP are between one and ten minutes.  After  the
733       client exhausts  its  retransmits  (the  value  of  the  retrans  mount
734       option),  it  assumes a network partition has occurred, and attempts to
735       reconnect to the server on a fresh socket. Since TCP itself makes  net‐
736       work  data  transfer reliable, rsize and wsize can safely be allowed to
737       default to the largest values supported  by  both  client  and  server,
738       independent of the network's MTU size.
739
740   Using the mountproto mount option
741       This  section  applies only to NFS version 2 and version 3 mounts since
742       NFS version 4 does not use a separate protocol for mount requests.
743
744       The Linux NFS client can use a different transport  for  contacting  an
745       NFS server's rpcbind service, its mountd service, its Network Lock Man‐
746       ager (NLM) service, and its NFS service.  The exact transports employed
747       by the Linux NFS client for each mount point depends on the settings of
748       the transport mount options, which include proto, mountproto, udp,  and
749       tcp.
750
751       The  client sends Network Status Manager (NSM) notifications via UDP no
752       matter what transport options are specified, but listens for server NSM
753       notifications  on  both  UDP  and  TCP.   The  NFS  Access Control List
754       (NFSACL) protocol shares the same transport as the main NFS service.
755
756       If no transport options are specified, the Linux NFS client uses UDP to
757       contact the server's mountd service, and TCP to contact its NLM and NFS
758       services by default.
759
760       If the server does not support these transports for these services, the
761       mount(8)  command  attempts  to  discover what the server supports, and
762       then retries the mount request once using  the  discovered  transports.
763       If  the server does not advertise any transport supported by the client
764       or is misconfigured, the mount request fails.  If the bg option  is  in
765       effect,  the  mount command backgrounds itself and continues to attempt
766       the specified mount request.
767
768       When the proto option, the udp option, or the tcp option  is  specified
769       but  the  mountproto  option is not, the specified transport is used to
770       contact both the server's mountd service and its NLM  and  NFS  ser‐
771       vices.
772
773       If the mountproto option is specified but none of the proto, udp or tcp
774       options are specified, then the specified transport  is  used  for  the
775       initial mountd request, but the mount command attempts to discover what
776       the server supports for the NFS protocol, preferring TCP if both trans‐
777       ports are supported.
778
779       If both the mountproto and proto (or udp or tcp) options are specified,
780       then the transport specified by the mountproto option is used  for  the
781       initial mountd request, and the transport specified by the proto option
782       (or the udp or tcp options) is used for NFS, no matter what order these
783       options  appear.   No automatic service discovery is performed if these
784       options are specified.
785
786       If any of the proto, udp, tcp, or mountproto options are specified more
787       than  once on the same mount command line, then the value of the right‐
788       most instance of each of these options takes effect.
789
790   Using NFS over UDP on high-speed links
791       Using NFS over UDP on high-speed links such as Gigabit can cause silent
792       data corruption.
793
794       The  problem  can be triggered at high loads, and is caused by problems
795       in IP fragment reassembly. NFS reads and writes typically transmit UDP
796       packets of 4 Kilobytes or more, which have to be broken up into several
797       fragments in order to be sent over  the  Ethernet  link,  which  limits
798       packets  to  1500 bytes by default. This process happens at the IP net‐
799       work layer and is called fragmentation.
800
801       In order to identify fragments that belong together, IP assigns a 16-bit
802       IP  ID  value  to  each  packet;  fragments generated from the same UDP
803       packet will have the same IP ID.  The  receiving  system  will  collect
804       these  fragments and combine them to form the original UDP packet. This
805       process is called reassembly. The default timeout for packet reassembly
806       is 30 seconds; if the network stack does not receive all fragments of a
807       given packet within this interval, it assumes the  missing  fragment(s)
808       got lost and discards those it already received.
809
810       The  problem  this creates over high-speed links is that it is possible
811       to send more than 65536 packets within 30 seconds. In fact, with  heavy
812       NFS  traffic  one can observe that the IP IDs repeat after about 5 sec‐
813       onds.
814
815       This has serious effects on reassembly:  if  one  fragment  gets  lost,
816       another  fragment  from a different packet but with the same IP ID will
817       arrive within the 30 second timeout, and the network stack will combine
818       these  fragments to form a new packet. Most of the time, network layers
819       above IP will detect this mismatched reassembly - in the case  of  UDP,
820       the UDP checksum, which is a 16-bit checksum  over  the  entire  packet
821       payload, will usually not match, and UDP will discard the bad packet.
822
823       However, the UDP checksum is only 16 bits, so there is a chance of 1 in
824       65536 that it will match even for a mismatched reassembly (assuming a
825       random payload, which very often isn't the case). When the checksum
826       matches by accident, silent data corruption occurs.
827
828       This potential should be taken seriously, at least on Gigabit Ethernet.
829       Network speeds of 100Mbit/s  should  be  considered  less  problematic,
830       because  with  most  traffic  patterns IP ID wrap around will take much
831       longer than 30 seconds.
832
833       It is therefore strongly recommended to use NFS over TCP  where  possi‐
834       ble, since TCP does not perform fragmentation.
835
836       If  you absolutely have to use NFS over UDP over Gigabit Ethernet, some
837       steps can be taken to mitigate the problem and reduce  the  probability
838       of corruption:
839
840       Jumbo frames:  Many  Gigabit  network cards are capable of transmitting
841                      frames bigger than the 1500 byte  limit  of  traditional
842                      Ethernet,  typically  9000  bytes. Using jumbo frames of
843                      9000 bytes will allow you to run NFS over UDP at a  page
844                      size  of  8K  without  fragmentation. Of course, this is
845                      only feasible if all  involved  stations  support  jumbo
846                      frames.
847
848                      To  enable  a machine to send jumbo frames on cards that
849                      support it, it is sufficient to configure the  interface
850                      for an MTU value of 9000.
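
                      For example, a hypothetical command to do so with
                      iproute2 (the interface name is illustrative):

                              # ip link set dev eth0 mtu 9000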
851
852       Lower reassembly timeout:
853                      By  lowering this timeout below the time it takes the IP
854                      ID counter to wrap around, incorrect reassembly of frag‐
855                      ments  can  be prevented as well. To do so, simply write
856                      the  new  timeout  value  (in  seconds)  to   the   file
857                      /proc/sys/net/ipv4/ipfrag_time.
858
859                      A value of 2 seconds will greatly reduce the probability
860                      of IP ID clashes on a single Gigabit link, while  still
861                      allowing  for  a reasonable timeout when receiving frag‐
862                      mented traffic from distant peers.
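
                      For example, either of the following hypothetical
                      commands lowers the reassembly timeout to 2 seconds:

                              # sysctl -w net.ipv4.ipfrag_time=2
                              # echo 2 > /proc/sys/net/ipv4/ipfrag_time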
863

DATA AND METADATA COHERENCE

865       Some modern cluster file systems provide perfect cache coherence  among
866       their  clients.  Perfect cache coherence among disparate NFS clients is
867       expensive to achieve, especially on wide area networks.  As  such,  NFS
868       settles  for  weaker cache coherence that satisfies the requirements of
869       most file sharing types.
870
871   Close-to-open cache consistency
872       Typically file sharing is completely sequential.  First client A  opens
873       a  file,  writes  something to it, then closes it.  Then client B opens
874       the same file, and reads the changes.
875
876       When an application opens a file stored on an NFS version 3 server, the
877       NFS  client  checks that the file exists on the server and is permitted
878       to the opener by sending a GETATTR or ACCESS request.  The  NFS  client
879       sends  these  requests regardless of the freshness of the file's cached
880       attributes.
881
882       When the application closes the file, the NFS client  writes  back  any
883       pending  changes  to  the  file  so  that  the next opener can view the
884       changes.  This also gives the NFS client an opportunity to report write
885       errors to the application via the return code from close(2).
886
887       The  behavior  of  checking  at open time and flushing at close time is
888       referred to as close-to-open cache consistency, or CTO.  It can be dis‐
889       abled for an entire mount point using the nocto mount option.
890
891   Weak cache consistency
892       There  are  still  opportunities  for  a client's data cache to contain
893       stale data.  The NFS version 3 protocol introduced "weak cache  consis‐
894       tency" (also known as WCC) which provides a way of efficiently checking
895       a file's attributes before and after a single request.  This  allows  a
896       client  to  help  identify  changes  that could have been made by other
897       clients.
898
899       When a client is using many concurrent operations that update the  same
900       file  at the same time (for example, during asynchronous write behind),
901       it is still difficult to tell whether it was that client's  updates  or
902       some other client's updates that altered the file.
903
904   Attribute caching
905       Use  the  noac  mount option to achieve attribute cache coherence among
906       multiple clients.  Almost  every  file  system  operation  checks  file
907       attribute  information.  The client keeps this information cached for a
908       period of time to reduce network and server  load.   When  noac  is  in
909       effect,  a client's file attribute cache is disabled, so each operation
910       that needs to check a file's attributes is forced to  go  back  to  the
911       server.   This  permits a client to see changes to a file very quickly,
912       at the cost of many extra network operations.
913
914       Be careful not to confuse the noac option with "no data caching."   The
915       noac  mount  option prevents the client from caching file metadata, but
916       there are still races that may result in data cache incoherence between
917       client and server.
918
919       The  NFS  protocol  is not designed to support true cluster file system
920       cache coherence without some type  of  application  serialization.   If
921       absolute cache coherence among clients is required, applications should
922       use file locking. Alternatively, applications can also open their files
923       with the O_DIRECT flag to disable data caching entirely.
924
925   File timestamp maintenance
926       NFS  servers are responsible for managing file and directory timestamps
927       (atime, ctime, and mtime).  When a file is accessed or  updated  on  an
928       NFS  server,  the file's timestamps are updated just like they would be
929       on a filesystem local to an application.
930
931       NFS clients cache file  attributes,  including  timestamps.   A  file's
932       timestamps are updated on NFS clients when its attributes are retrieved
933       from the NFS server.  Thus there may be  some  delay  before  timestamp
934       updates on an NFS server appear to applications on NFS clients.
935
936       To  comply  with  the  POSIX  filesystem standard, the Linux NFS client
937       relies on NFS servers to keep a file's mtime and ctime timestamps prop‐
938       erly  up  to  date.  It does this by flushing local data changes to the
939       server before reporting mtime to applications via system calls such  as
940       stat(2).
941
942       The  Linux  client  handles  atime  updates more loosely, however.  NFS
943       clients maintain good performance by caching data, but that means  that
944       application  reads,  which  normally update atime, are not reflected to
945       the server where a file's atime is actually maintained.
946
947       Because of this caching behavior, the Linux NFS client does not support
948       generic atime-related mount options.  See mount(8) for details on these
949       options.
950
951       In particular, the atime/noatime, diratime/nodiratime, relatime/norela‐
952       time, and strictatime/nostrictatime mount options have no effect on NFS
953       mounts.
954
955       /proc/mounts may report that the relatime mount option is  set  on  NFS
956       mounts,  but  in fact the atime semantics are always as described here,
957       and are not like relatime semantics.
958
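       For example (the entry shown is illustrative, not literal output):

               $ grep nfs /proc/mounts
               server:/export /mnt nfs4 rw,relatime,vers=4.2,... 0 0
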
959   Directory entry caching
960       The Linux NFS client caches the result of all NFS LOOKUP requests.   If
961       the  requested  directory  entry  exists  on  the server, the result is
962       referred to as a positive lookup result.  If  the  requested  directory
963       entry  does  not  exist  on  the  server  (that is, the server returned
       ENOENT), the result is referred to as a negative lookup result.
965
966       To detect when directory entries have been  added  or  removed  on  the
967       server,  the  Linux  NFS  client  watches  a directory's mtime.  If the
968       client detects a change in a directory's mtime, the  client  drops  all
969       cached  LOOKUP results for that directory.  Since the directory's mtime
970       is a cached attribute, it may take some time before a client notices it
971       has  changed.  See the descriptions of the acdirmin, acdirmax, and noac
972       mount options for more information about how long a  directory's  mtime
973       is cached.
974
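       For example, to shorten how long directory attributes, including
       mtime, may be cached (the values are arbitrary and in seconds):

               # revalidate a directory's attributes every 3-10 seconds
               mount -o acdirmin=3,acdirmax=10 server:/export /mnt
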
975       Caching directory entries improves the performance of applications that
976       do not share files with applications on other  clients.   Using  cached
977       information  about directories can interfere with applications that run
978       concurrently on multiple clients and need to  detect  the  creation  or
979       removal of files quickly, however.  The lookupcache mount option allows
980       some tuning of directory entry caching behavior.
981
982       Before kernel release 2.6.28, the Linux NFS client tracked  only  posi‐
983       tive  lookup results.  This permitted applications to detect new direc‐
984       tory entries created by other clients  quickly  while  still  providing
985       some of the performance benefits of caching.  If an application depends
986       on the previous lookup caching behavior of the Linux  NFS  client,  you
987       can use lookupcache=positive.
988
989       If  the client ignores its cache and validates every application lookup
990       request with the server, that client can immediately detect when a  new
991       directory  entry  has been either created or removed by another client.
992       You can specify this behavior using lookupcache=none.   The  extra  NFS
993       requests  needed  if  the  client  does not cache directory entries can
994       exact a performance penalty.  Disabling lookup caching should result in
995       less of a performance penalty than using noac, and has no effect on how
996       the NFS client caches the attributes of files.
997
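       For example (the server name and paths are placeholders):

               # cache only positive results, as before kernel 2.6.28
               mount -o lookupcache=positive server:/export /mnt

               # validate every lookup with the server
               mount -o lookupcache=none server:/export /mnt
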
998   The sync mount option
999       The NFS client treats the sync mount option differently than some other
1000       file  systems  (refer to mount(8) for a description of the generic sync
1001       and async mount options).  If neither sync nor async is  specified  (or
1002       if the async option is specified), the NFS client delays sending appli‐
1003       cation writes to the server until any of these events occur:
1004
1005              Memory pressure forces reclamation of system memory resources.
1006
1007              An  application  flushes  file  data  explicitly  with  sync(2),
1008              msync(2), or fsync(3).
1009
1010              An application closes a file with close(2).
1011
1012              The file is locked/unlocked via fcntl(2).
1013
1014       In other words, under normal circumstances, data written by an applica‐
1015       tion may not immediately appear on the server that hosts the file.
1016
1017       If the sync option is specified on a mount point, any system call  that
1018       writes data to files on that mount point causes that data to be flushed
1019       to the server before the system call returns  control  to  user  space.
1020       This provides greater data cache coherence among clients, but at a sig‐
1021       nificant performance cost.
1022
1023       Applications can use the O_SYNC open flag to force  application  writes
1024       to  individual files to go to the server immediately without the use of
1025       the sync mount option.
1026
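       For example (the server name and paths are placeholders):

               # each write is flushed to the server before returning
               mount -o sync server:/export /mnt
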
1027   Using file locks with NFS
1028       The Network Lock Manager protocol is a separate sideband protocol  used
1029       to  manage  file locks in NFS version 2 and version 3.  To support lock
1030       recovery after a client or server reboot, a second sideband protocol --
1031       known  as  the Network Status Manager protocol -- is also required.  In
1032       NFS version 4, file locking is supported directly in the main NFS  pro‐
1033       tocol, and the NLM and NSM sideband protocols are not used.
1034
1035       In  most  cases, NLM and NSM services are started automatically, and no
1036       extra configuration is required.  Configure all NFS clients with fully-
1037       qualified  domain  names to ensure that NFS servers can find clients to
1038       notify them of server reboots.
1039
1040       NLM supports advisory file locks only.  To lock NFS files, use fcntl(2)
1041       with  the  F_GETLK  and F_SETLK commands.  The NFS client converts file
1042       locks obtained via flock(2) to advisory locks.
1043
1044       When mounting servers that do not support the  NLM  protocol,  or  when
1045       mounting  an  NFS server through a firewall that blocks the NLM service
1046       port, specify the nolock mount option. NLM  locking  must  be  disabled
1047       with  the  nolock option when using NFS to mount /var because /var con‐
1048       tains files used by the NLM implementation on Linux.
1049
       Specifying the nolock option can also improve the performance of a
       proprietary application that runs on a single client and uses file
       locks extensively.
1053
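       For example (the server name and paths are placeholders):

               # skip NLM; fcntl(2) locks are then local to this client
               mount -o nolock server:/export /mnt
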
1054   NFS version 4 caching features
1055       The data and metadata caching behavior of NFS version 4 clients is sim‐
1056       ilar to that of earlier versions.  However, NFS version 4 adds two fea‐
1057       tures that improve cache behavior: change attributes and  file  delega‐
1058       tion.
1059
1060       The  change  attribute is a new part of NFS file and directory metadata
1061       which tracks data changes.  It replaces the use of a  file's  modifica‐
1062       tion  and  change time stamps as a way for clients to validate the con‐
1063       tent of their caches.  Change attributes are independent  of  the  time
1064       stamp resolution on either the server or client, however.
1065
1066       A  file  delegation  is  a contract between an NFS version 4 client and
1067       server that allows the client to treat a  file  temporarily  as  if  no
1068       other client is accessing it.  The server promises to notify the client
1069       (via a callback request) if another  client  attempts  to  access  that
1070       file.  Once a file has been delegated to a client, the client can cache
1071       that file's data  and  metadata  aggressively  without  contacting  the
1072       server.
1073
1074       File  delegations  come in two flavors: read and write.  A read delega‐
1075       tion means that the server notifies the client about any other  clients
1076       that  want  to  write  to  the file.  A write delegation means that the
1077       client gets notified about either read or write accessors.
1078
1079       Servers grant file delegations when a file is opened,  and  can  recall
1080       delegations  at  any  time when another client wants access to the file
1081       that conflicts with any delegations already  granted.   Delegations  on
1082       directories are not supported.
1083
1084       In  order to support delegation callback, the server checks the network
1085       return path to the client during the client's initial contact with  the
1086       server.   If  contact with the client cannot be established, the server
1087       simply does not grant any delegations to that client.
1088

SECURITY CONSIDERATIONS
1090       NFS servers control access to file data, but they depend on  their  RPC
1091       implementation  to provide authentication of NFS requests.  Traditional
1092       NFS access control mimics the standard mode bit access control provided
1093       in local file systems.  Traditional RPC authentication uses a number to
1094       represent each user (usually the user's own uid), a number to represent
1095       the  user's  group  (the  user's  gid), and a set of up to 16 auxiliary
1096       group numbers to represent other groups of which the user may be a mem‐
1097       ber.
1098
1099       Typically,  file  data  and user ID values appear unencrypted (i.e. "in
1100       the clear") on the network.  Moreover, NFS versions 2 and 3  use  sepa‐
1101       rate  sideband protocols for mounting, locking and unlocking files, and
1102       reporting system status of clients and servers.  These auxiliary proto‐
1103       cols use no authentication.
1104
1105       In  addition  to  combining  these sideband protocols with the main NFS
1106       protocol, NFS version 4 introduces more advanced forms of  access  con‐
1107       trol,  authentication, and in-transit data protection.  The NFS version
1108       4 specification mandates support for strong authentication and security
1109       flavors   that  provide  per-RPC  integrity  checking  and  encryption.
1110       Because NFS version 4 combines the function of the  sideband  protocols
1111       into  the main NFS protocol, the new security features apply to all NFS
1112       version 4 operations including  mounting,  file  locking,  and  so  on.
       RPCSEC GSS authentication can also be used with NFS versions 2 and 3, but
1114       it does not protect their sideband protocols.
1115
1116       The sec mount option specifies the security flavor that is in effect on
1117       a  given  NFS  mount point.  Specifying sec=krb5 provides cryptographic
1118       proof of a user's identity in each RPC request.  This  provides  strong
1119       verification  of  the  identity  of users accessing data on the server.
1120       Note that additional configuration besides adding this mount option  is
1121       required   in   order  to  enable  Kerberos  security.   Refer  to  the
1122       rpc.gssd(8) man page for details.
1123
1124       Two additional flavors of Kerberos security are  supported:  krb5i  and
1125       krb5p.   The  krb5i security flavor provides a cryptographically strong
1126       guarantee that the data in each RPC request has not been tampered with.
1127       The  krb5p  security  flavor encrypts every RPC request to prevent data
1128       exposure during  network  transit;  however,  expect  some  performance
1129       impact  when  using  integrity checking or encryption.  Similar support
1130       for other forms of cryptographic security is also available.
1131
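       For example (the server name and paths are placeholders; Kerberos
       must already be configured as described in rpc.gssd(8)):

               # cryptographic proof of user identity per RPC request
               mount -o sec=krb5 server:/export /mnt

               # krb5 plus per-request integrity checking
               mount -o sec=krb5i server:/export /mnt

               # krb5 plus encryption of every RPC request
               mount -o sec=krb5p server:/export /mnt
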
1132       The NFS version 4 protocol allows a client to renegotiate the  security
1133       flavor  when  the  client  crosses into a new filesystem on the server.
       The newly negotiated flavor affects only accesses of the new
       filesystem.
1136
1137       Such negotiation typically occurs when a client crosses from a server's
1138       pseudo-fs into one of the server's exported physical filesystems, which
1139       often have more restrictive security settings than the pseudo-fs.
1140
1141   Using non-privileged source ports
1142       NFS  clients  usually communicate with NFS servers via network sockets.
1143       Each end of a socket is assigned a port value, which is simply a number
1144       between  1 and 65535 that distinguishes socket endpoints at the same IP
1145       address.  A socket is uniquely defined by a  tuple  that  includes  the
1146       transport protocol (TCP or UDP) and the port values and IP addresses of
1147       both endpoints.
1148
1149       The NFS client can choose any source port value for  its  sockets,  but
1150       usually  chooses  a privileged port.  A privileged port is a port value
1151       less than 1024.  Only a process  with  root  privileges  may  create  a
1152       socket with a privileged source port.
1153
1154       The exact range of privileged source ports that can be chosen is set by
1155       a pair of sysctls to avoid choosing a well-known port, such as the port
1156       used  by  ssh.  This means the number of source ports available for the
1157       NFS client, and therefore the number of socket connections that can  be
1158       used at the same time, is practically limited to only a few hundred.
1159
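       On Linux, this range is typically controlled by the sunrpc sysctls
       shown below; the exact names and defaults may vary by kernel:

               # inspect the reserved source port range used for RPC
               sysctl sunrpc.min_resvport sunrpc.max_resvport
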
1160       As  described above, the traditional default NFS authentication scheme,
1161       known as AUTH_SYS, relies on sending local UID and GID numbers to iden‐
1162       tify  users  making NFS requests.  An NFS server assumes that if a con‐
1163       nection comes from a privileged port, the UID and GID  numbers  in  the
1164       NFS requests on this connection have been verified by the client's ker‐
1165       nel or some other local authority.  This is an easy  system  to  spoof,
1166       but on a trusted physical network between trusted hosts, it is entirely
1167       adequate.
1168
1169       Roughly speaking, one socket is used for each NFS mount  point.   If  a
1170       client  could  use  non-privileged  source ports as well, the number of
1171       sockets allowed, and  thus  the  maximum  number  of  concurrent  mount
1172       points, would be much larger.
1173
       Using non-privileged source ports may compromise server security
       somewhat, since any user on AUTH_SYS mount points can then pretend
       to be any other user when making NFS requests.  Thus NFS servers do
       not allow this by default; it must be enabled explicitly, usually
       via an export option.
1178
1179       To retain good security while allowing as many mount points  as  possi‐
1180       ble,  it is best to allow non-privileged client connections only if the
1181       server and client both require strong authentication, such as Kerberos.
1182
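       For example (hostnames and paths are illustrative; see exports(5)
       for the server-side insecure option):

               # server's /etc/exports: accept non-privileged source ports
               /export   client.example.com(rw,insecure,sec=krb5)

               # client: non-privileged source port, Kerberos authentication
               mount -o noresvport,sec=krb5 server:/export /mnt
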
1183   Mounting through a firewall
1184       A firewall may reside between an NFS client and server, or  the  client
1185       or  server  may block some of its own ports via IP filter rules.  It is
1186       still possible to mount an NFS server through a firewall,  though  some
1187       of  the  mount(8) command's automatic service endpoint discovery mecha‐
1188       nisms may not work; this requires  you  to  provide  specific  endpoint
1189       details via NFS mount options.
1190
1191       NFS  servers  normally  run a portmapper or rpcbind daemon to advertise
1192       their service endpoints to clients. Clients use the rpcbind  daemon  to
1193       determine:
1194
1195              What network port each RPC-based service is using
1196
1197              What transport protocols each RPC-based service supports
1198
1199       The  rpcbind daemon uses a well-known port number (111) to help clients
1200       find a service endpoint.  Although NFS often uses a standard port  num‐
1201       ber  (2049),  auxiliary services such as the NLM service can choose any
1202       unused port number at random.
1203
1204       Common firewall configurations block the well-known rpcbind  port.   In
       the absence of an rpcbind service, the server administrator fixes the
1206       port number of NFS-related services so  that  the  firewall  can  allow
1207       access to specific NFS service ports.  Client administrators then spec‐
1208       ify the port number for the mountd service via the  mount(8)  command's
1209       mountport  option.   It may also be necessary to enforce the use of TCP
1210       or UDP if the firewall blocks one of those transports.
1211
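       For example, if the server administrator has fixed the mountd
       service to port 20048 (the port numbers here are illustrative):

               # provide endpoints explicitly instead of asking rpcbind
               mount -o mountport=20048,port=2049,proto=tcp server:/export /mnt
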
1212   NFS Access Control Lists
1213       Solaris allows NFS version 3 clients direct access to POSIX Access Con‐
1214       trol Lists stored in its local file systems.  This proprietary sideband
1215       protocol, known as NFSACL, provides richer  access  control  than  mode
1216       bits.   Linux  implements  this  protocol  for  compatibility  with the
1217       Solaris NFS implementation.  The NFSACL protocol never became  a  stan‐
1218       dard part of the NFS version 3 specification, however.
1219
1220       The  NFS  version 4 specification mandates a new version of Access Con‐
1221       trol Lists that are semantically richer than POSIX ACLs.  NFS version 4
1222       ACLs  are  not fully compatible with POSIX ACLs; as such, some transla‐
1223       tion between the two is required in an  environment  that  mixes  POSIX
1224       ACLs and NFS version 4.
1225
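       For example, on many Linux distributions the nfs4-acl-tools package
       provides utilities for inspecting and editing NFS version 4 ACLs on
       a mounted file system (package and tool names may vary):

               # show a file's NFSv4 ACL
               nfs4_getfacl /mnt/file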

THE REMOUNT OPTION
1227       Generic  mount options such as rw and sync can be modified on NFS mount
1228       points using the remount option.  See mount(8) for more information  on
1229       generic mount options.
1230
       With few exceptions, NFS-specific options cannot be modified during
       a remount.  For example, the underlying transport and the NFS
       version cannot be changed by a remount.
1234
1235       Performing a remount on an NFS file system mounted with the noac option
1236       may have unintended consequences.  The noac option is a combination  of
1237       the generic option sync, and the NFS-specific option actimeo=0.
1238
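       For example, the following remounts (the mount point is a
       placeholder) have roughly the same effect:

               # remount with attribute caching disabled
               mount -o remount,noac /mnt

               # the same, expressed as its component options
               mount -o remount,sync,actimeo=0 /mnt
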
1239   Unmounting after a remount
1240       For  mount  points that use NFS versions 2 or 3, the NFS umount subcom‐
1241       mand depends on knowing the original set of mount options used to  per‐
1242       form  the  MNT  operation.  These options are stored on disk by the NFS
1243       mount subcommand, and can be erased by a remount.
1244
1245       To ensure that the saved mount options are not erased during a remount,
1246       specify  either  the  local mount directory, or the server hostname and
1247       export pathname, but not both, during a remount.  For example,
1248
1249               mount -o remount,ro /mnt
1250
1251       merges the mount option ro with the mount options already saved on disk
1252       for the NFS server mounted at /mnt.
1253

FILES
1255       /etc/fstab     file system table
1256
1257       /etc/nfsmount.conf
1258                      Configuration file for NFS mounts
1259

NOTES
1261       Before 2.4.7, the Linux NFS client did not support NFS over TCP.
1262
1263       Before  2.4.20,  the  Linux  NFS  client  used a heuristic to determine
1264       whether cached file data was still valid rather than using the standard
1265       close-to-open cache coherency method described above.
1266
       Starting with 2.4.22, the Linux NFS client employs a Van Jacobson-based
1268       RTT estimator to determine retransmit timeout  values  when  using  NFS
1269       over UDP.
1270
1271       Before 2.6.0, the Linux NFS client did not support NFS version 4.
1272
1273       Before  2.6.8,  the  Linux  NFS  client used only synchronous reads and
1274       writes when the rsize and wsize settings were smaller than the system's
1275       page size.
1276
       The Linux client's support for protocol versions depends on whether
       the kernel was built with options CONFIG_NFS_V2, CONFIG_NFS_V3,
       CONFIG_NFS_V4, CONFIG_NFS_V4_1, and CONFIG_NFS_V4_2.
1280

SEE ALSO
1282       fstab(5), mount(8), umount(8), mount.nfs(5), umount.nfs(5), exports(5),
1283       nfsmount.conf(5),   netconfig(5),   ipv6(7),   nfsd(8),   sm-notify(8),
1284       rpc.statd(8), rpc.idmapd(8), rpc.gssd(8), rpc.svcgssd(8), kerberos(1)
1285
1286       RFC 768 for the UDP specification.
1287       RFC 793 for the TCP specification.
1288       RFC 1094 for the NFS version 2 specification.
1289       RFC 1813 for the NFS version 3 specification.
1290       RFC 1832 for the XDR specification.
1291       RFC 1833 for the RPC bind specification.
1292       RFC 2203 for the RPCSEC GSS API protocol specification.
1293       RFC 7530 for the NFS version 4.0 specification.
1294       RFC 5661 for the NFS version 4.1 specification.
1295       RFC 7862 for the NFS version 4.2 specification.
1296
1297
1298
1299                                9 October 2012                          NFS(5)