NVME-CONNECT-ALL(1)                NVMe Manual               NVME-CONNECT-ALL(1)


NAME
       nvme-connect-all - Discover and Connect to Fabrics controllers.

SYNOPSIS
       nvme connect-all
                [--transport=<trtype> | -t <trtype>]
                [--nqn=<subnqn> | -n <subnqn>]
                [--traddr=<traddr> | -a <traddr>]
                [--trsvcid=<trsvcid> | -s <trsvcid>]
                [--host-traddr=<traddr> | -w <traddr>]
                [--host-iface=<iface> | -f <iface>]
                [--hostnqn=<hostnqn> | -q <hostnqn>]
                [--hostid=<hostid> | -I <hostid>]
                [--raw=<filename> | -r <filename>]
                [--config-file=<cfg> | -C <cfg>]
                [--keep-alive-tmo=<#> | -k <#>]
                [--reconnect-delay=<#> | -c <#>]
                [--ctrl-loss-tmo=<#> | -l <#>]
                [--hdr-digest | -g]
                [--data-digest | -G]
                [--nr-io-queues=<#> | -i <#>]
                [--nr-write-queues=<#> | -W <#>]
                [--nr-poll-queues=<#> | -P <#>]
                [--queue-size=<#> | -Q <#>]
                [--persistent | -p]
                [--quiet | -S]
                [--dump-config | -O]

DESCRIPTION
       Send one or more Discovery requests to an NVMe over Fabrics Discovery
       Controller, and create controllers for the returned discovery records.

       If no parameters are given, then nvme connect-all will attempt to find
       a /etc/nvme/discovery.conf file to use to supply a list of connect-all
       commands to run. If no /etc/nvme/discovery.conf file exists, the
       command will quit with an error.

       Otherwise, a specific Discovery Controller should be specified using
       the --transport and --traddr options (and, if necessary, --trsvcid); a
       Discovery request will then be sent to that Discovery Controller.

       See the documentation for the nvme-discover(1) command for further
       background.

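As a sketch (not taken from this manual), the second form might be assembled as follows for an NVMe/TCP Discovery Controller. The address is illustrative, and 8009 is the conventional NVMe/TCP discovery service port rather than a value this page specifies:

```shell
# Sketch: assemble a connect-all invocation for a specific NVMe/TCP
# Discovery Controller (address is illustrative; 8009 is the conventional
# NVMe/TCP discovery port, not a default stated in this manual)
cmd="nvme connect-all --transport=tcp --traddr=192.168.1.3 --trsvcid=8009"
echo "$cmd"    # run the printed command as root on a real host
```

The command itself must be run with root privileges on a host with the nvme-tcp transport available.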
OPTIONS
       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc)                       │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │tcp   │ The network fabric is a    │
           │      │ TCP/IP network.            │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host                 │
           └──────┴────────────────────────────┘

       -n <subnqn>, --nqn=<subnqn>
           This field specifies the name of the NVMe subsystem to connect to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Discovery
           Controller. For transports using IP addressing (e.g. rdma) this
           should be an IP-based address (ex. IPv4).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports
           using IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller. For TCP, this sets the source address
           on the socket.

       -f <iface>, --host-iface=<iface>
           This field specifies the network interface used on the host to
           connect to the Controller (e.g. IP eth1, enp2s0, enx78e7d1ea46da).
           This forces the connection to be made on a specific interface
           instead of letting the system decide.

       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default Host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the
           autogenerated NQN value from the NVMe Host kernel module is used
           next. The Host NQN uniquely identifies the NVMe Host, and may be
           used by the Discovery Controller to control what NVMe Target
           resources are allocated to the NVMe Host for a connection.

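For illustration only, an autogenerated-style Host NQN uses the uuid-based NQN format defined by the NVMe specification. The sketch below fabricates one from the kernel's UUID generator instead of reading /etc/nvme/hostnqn:

```shell
# Illustrative only: build a uuid-format Host NQN resembling the
# autogenerated default (a real host reads /etc/nvme/hostnqn first,
# as described above; requires Linux for /proc/sys/kernel/random/uuid)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(cat /proc/sys/kernel/random/uuid)"
echo "$hostnqn"
```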
       -I <hostid>, --hostid=<hostid>
           Overrides the default Host ID: a UUID (Universally Unique
           Identifier) that identifies the NVMe Host. The value should be
           formatted as a standard UUID string.

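A quick way to sanity-check the UUID formatting before passing a value to --hostid (the value below is illustrative, not tied to any real host):

```shell
# Check that a candidate host ID is a correctly formatted UUID
# (8-4-4-4-12 hexadecimal digits; the value below is illustrative)
hostid="4c4c4544-0156-4a10-8134-b7d04f383232"
if echo "$hostid" | grep -Eiq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
    result=valid
else
    result=invalid
fi
echo "$result"
```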
       -r <filename>, --raw=<filename>
           This option takes the output of the nvme connect-all command and
           dumps it to a raw binary file. By default nvme connect-all will
           dump the output to stdout.

       -C <cfg>, --config-file=<cfg>
           Use the specified JSON configuration file instead of the default
           /etc/nvme/config.json file, or "none" to not read in an existing
           configuration file. The JSON configuration file format is
           documented in
           https://github.com/linux-nvme/libnvme/doc/config-schema.json

       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep alive timeout (in seconds). This
           option will be ignored for discovery, but will be passed on to
           the subsequent connect call.

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in
           seconds).

       -g, --hdr-digest
           Generates/verifies header digest (TCP).

       -G, --data-digest
           Generates/verifies data digest (TCP).

       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the
           driver. This option will be ignored for discovery, but will be
           passed on to the subsequent connect call.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling
           latency-sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues
           created by the driver. This option will be ignored for
           discovery, but will be passed on to the subsequent connect call.

       -p, --persistent
           Don’t remove the discovery controller after retrieving the
           discovery log page.

       -S, --quiet
           Suppress error messages.

       -O, --dump-config
           Print the resulting JSON configuration to stdout.

EXAMPLES
       •   Connect to all records returned by the Discovery Controller with
           IPv4 address 192.168.1.3 for all resources allocated for NVMe
           Host name host1-rogue-nqn on the RDMA network. Port 4420 is used
           by default:

               # nvme connect-all --transport=rdma --traddr=192.168.1.3 \
               --hostnqn=host1-rogue-nqn

       •   Issue a nvme connect-all command using a /etc/nvme/discovery.conf
           file:

               # Machine default 'nvme discover' commands. Query the
               # Discovery Controller's two ports (some resources may only
               # be accessible on a single port). Note an official
               # nqn (Host) name defined in the NVMe specification is being
               # used in this example.
               -t rdma -a 192.168.69.33 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
               -t rdma -a 192.168.1.4 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

           At the prompt type "nvme connect-all".

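As a further sketch (not part of the original examples), --host-iface and --persistent can be combined to pin discovery to one interface and keep the discovery controller afterwards. The interface name, address, and port are illustrative:

```shell
# Sketch: pin discovery to a specific interface and keep the discovery
# controller after the log page is retrieved (all values illustrative)
cmd="nvme connect-all -t tcp -a 192.168.1.3 -s 8009 -f enp2s0 --persistent"
echo "$cmd"    # run the printed command as root on a real host
```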
SEE ALSO
       nvme-discover(1) nvme-connect(1)

NVME
       Part of the nvme-user suite



NVMe                              11/04/2022             NVME-CONNECT-ALL(1)