NVME-CONNECT-ALL(1)               NVMe Manual              NVME-CONNECT-ALL(1)


NAME

       nvme-connect-all - Discover and Connect to Fabrics controllers.


SYNOPSIS

       nvme connect-all
                       [--transport=<trtype>     | -t <trtype>]
                       [--traddr=<traddr>        | -a <traddr>]
                       [--trsvcid=<trsvcid>      | -s <trsvcid>]
                       [--host-traddr=<traddr>   | -w <traddr>]
                       [--hostnqn=<hostnqn>      | -q <hostnqn>]
                       [--hostid=<hostid>        | -I <hostid>]
                       [--raw=<filename>         | -r <filename>]
                       [--keep-alive-tmo=<#>     | -k <#>]
                       [--reconnect-delay=<#>    | -c <#>]
                       [--ctrl-loss-tmo=<#>      | -l <#>]
                       [--hdr-digest             | -g]
                       [--data-digest            | -G]
                       [--nr-io-queues=<#>       | -i <#>]
                       [--nr-write-queues=<#>    | -W <#>]
                       [--nr-poll-queues=<#>     | -P <#>]
                       [--queue-size=<#>         | -Q <#>]


DESCRIPTION

       Send one or more Discovery requests to an NVMe over Fabrics Discovery
       Controller, and create controllers for the returned discovery records.

       If no parameters are given, nvme connect-all will attempt to find a
       /etc/nvme/discovery.conf file that supplies a list of connect-all
       commands to run. If no /etc/nvme/discovery.conf file exists, the
       command exits with an error.

       Otherwise, a specific Discovery Controller should be specified using
       --transport, --traddr and, if necessary, --trsvcid; a Discovery
       request will then be sent to the specified Discovery Controller.

       See the documentation for the nvme-discover(1) command for further
       background.

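The fallback behavior described above can be sketched in shell; the configuration path comes from this page, while the messages are purely illustrative:

```shell
# With no arguments, nvme connect-all reads its connect arguments from
# /etc/nvme/discovery.conf; without that file it exits with an error.
cfg=/etc/nvme/discovery.conf
if [ -f "$cfg" ]; then
    echo "connect-all would run each argument line listed in $cfg"
else
    echo "connect-all would exit with an error: $cfg not found"
fi
```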

OPTIONS

       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc)                       │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host                 │
           └──────┴────────────────────────────┘

63
       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Discovery
           Controller. For transports using IP addressing (e.g. rdma) this
           should be an IP-based address (e.g. an IPv4 address).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports using
           IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Discovery Controller.

       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default Host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the autogenerated
           NQN value from the NVMe Host kernel module is used next. The Host
           NQN uniquely identifies the NVMe Host, and may be used by the
           Discovery Controller to control what NVMe Target resources are
           allocated to the NVMe Host for a connection.

       -I <hostid>, --hostid=<hostid>
           UUID (Universally Unique Identifier) that identifies the NVMe
           Host. The value should be formatted as a standard UUID string.

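For illustration only, the sketch below checks a sample Host ID against the 8-4-4-4-12 hexadecimal UUID layout before it would be passed to --hostid; the UUID value itself is made up:

```shell
# Hypothetical sample Host ID (any RFC 4122-style UUID string works;
# a real one could be generated with uuidgen(1)).
hostid="e5e1b1bc-7e6b-4c6a-9d3e-3c2a1b0f9d8e"

# Verify the 8-4-4-4-12 hex layout before using it as:
#   nvme connect-all ... --hostid="$hostid"
echo "$hostid" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$' \
    && echo "hostid has a valid UUID layout"
```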
       -r <filename>, --raw=<filename>
           This field will take the output of the nvme connect-all command and
           dump it to a raw binary file. By default nvme connect-all will dump
           the output to stdout.

       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep-alive timeout (in seconds). This option
           will be ignored for discovery, but will be passed on to the
           subsequent connect call.

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before a reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in seconds).

       -g, --hdr-digest
           Generates/verifies header digest (TCP).

       -G, --data-digest
           Generates/verifies data digest (TCP).

       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the driver.
           This option will be ignored for discovery, but will be passed on to
           the subsequent connect call.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling
           latency-sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues created
           by the driver. This option will be ignored for discovery, but will
           be passed on to the subsequent connect call.


EXAMPLES

       ·   Connect to all records returned by the Discovery Controller with
           IPv4 address 192.168.1.3 for all resources allocated for NVMe Host
           name host1-rogue-nqn on the RDMA network. Port 4420 is used by
           default:

               # nvme connect-all --transport=rdma --traddr=192.168.1.3 \
               --hostnqn=host1-rogue-nqn

       ·   Issue an nvme connect-all command using a /etc/nvme/discovery.conf
           file:

               # Machine default 'nvme discover' commands.  Query the
               # Discovery Controller's two ports (some resources may only
               # be accessible on a single port).  Note an official
               # nqn (Host) name defined in the NVMe specification is being
               # used in this example.
               -t rdma -a 192.168.69.33 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
               -t rdma -a 192.168.1.4   -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

               At the prompt type "nvme connect-all".


SEE ALSO

       nvme-discover(1), nvme-connect(1)


NVME

       Part of the nvme-user suite



NVMe                              04/24/2020               NVME-CONNECT-ALL(1)