NVME-CONNECT-ALL(1)            NVMe Manual            NVME-CONNECT-ALL(1)



NAME
       nvme-connect-all - Discover and Connect to Fabrics controllers.

SYNOPSIS
       nvme connect-all
               [--transport=<trtype> | -t <trtype>]
               [--nqn=<subnqn> | -n <subnqn>]
               [--traddr=<traddr> | -a <traddr>]
               [--trsvcid=<trsvcid> | -s <trsvcid>]
               [--host-traddr=<traddr> | -w <traddr>]
               [--host-iface=<iface> | -f <iface>]
               [--hostnqn=<hostnqn> | -q <hostnqn>]
               [--hostid=<hostid> | -I <hostid>]
               [--raw=<filename> | -r <filename>]
               [--device=<device> | -d <device>]
               [--config-file=<cfg> | -C <cfg>]
               [--keep-alive-tmo=<sec> | -k <sec>]
               [--reconnect-delay=<#> | -c <#>]
               [--ctrl-loss-tmo=<#> | -l <#>]
               [--nr-io-queues=<#> | -i <#>]
               [--nr-write-queues=<#> | -W <#>]
               [--nr-poll-queues=<#> | -P <#>]
               [--queue-size=<#> | -Q <#>]
               [--keyring=<#>]
               [--tls_key=<#>]
               [--hdr-digest | -g]
               [--data-digest | -G]
               [--persistent | -p]
               [--tls]
               [--quiet | -S]
               [--dump-config | -O]
               [--nbft]
               [--no-nbft]
               [--nbft-path=<STR>]

DESCRIPTION
       Send one or more Discovery requests to an NVMe over Fabrics
       Discovery Controller, and create controllers for the returned
       discovery records.

       If no parameters are given, nvme connect-all will attempt to find a
       /etc/nvme/discovery.conf file and use it to supply a list of
       connect-all commands to run. If no /etc/nvme/discovery.conf file
       exists, the command exits with an error.
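
       As an illustration, each line of /etc/nvme/discovery.conf holds the
       arguments of one connect-all invocation (the addresses and Host NQN
       below are placeholders, not defaults):

           # hypothetical /etc/nvme/discovery.conf
           -t tcp -a 10.0.0.5 -s 4420
           -t rdma -a 10.0.1.5 -s 4420 -q nqn.2014-08.org.example:host1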

       Otherwise, a specific Discovery Controller should be specified with
       the --transport and --traddr options (and, if necessary, --trsvcid);
       a Discovery request will then be sent to that Discovery Controller.

       See the documentation for the nvme-discover(1) command for further
       background.

OPTIONS
       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ RDMA network (RoCE, iWARP, │
           │      │ InfiniBand, basic RDMA,    │
           │      │ etc.)                      │
           ├──────┼────────────────────────────┤
           │fc    │ The network fabric is a    │
           │      │ Fibre Channel network      │
           │      │ (work in progress).        │
           ├──────┼────────────────────────────┤
           │tcp   │ The network fabric is a    │
           │      │ TCP/IP network.            │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host.                │
           └──────┴────────────────────────────┘

       -n <subnqn>, --nqn=<subnqn>
           This field specifies the name of the NVMe subsystem to connect
           to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Discovery
           Controller. For transports using IP addressing (e.g. rdma), this
           should be an IP-based address (e.g. IPv4).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports
           using IP addressing (e.g. rdma), this field is the port number.
           By default, the IP port number for the RDMA transport is 4420.

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller. For TCP, this sets the source address
           of the socket.

       -f <iface>, --host-iface=<iface>
           This field specifies the network interface used on the host to
           connect to the Controller (e.g. eth1, enp2s0, enx78e7d1ea46da).
           This forces the connection to be made on a specific interface
           instead of letting the system decide.

       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default Host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the
           autogenerated NQN value from the NVMe Host kernel module is used
           next. The Host NQN uniquely identifies the NVMe Host, and may be
           used by the Discovery Controller to control what NVMe Target
           resources are allocated to the NVMe Host for a connection.
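
       The lookup order above can be sketched as a small POSIX shell helper
       (the function name and its arguments are illustrative, not part of
       nvme-cli; the kernel module's autogenerated NQN is represented by a
       caller-supplied default):

```shell
#!/bin/sh
# Illustrative sketch (not nvme-cli code) of the Host NQN lookup order:
# prefer the contents of a hostnqn file, else fall back to a default
# standing in for the kernel's autogenerated NQN.
resolve_hostnqn() {
    # $1: path to a hostnqn file (e.g. /etc/nvme/hostnqn)
    # $2: fallback NQN used when the file is absent or unreadable
    if [ -r "$1" ]; then
        head -n 1 "$1"
    else
        printf '%s\n' "$2"
    fi
}
```

       For example, resolve_hostnqn /etc/nvme/hostnqn "$autogen_nqn" would
       mirror the behavior described above.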

       -I <hostid>, --hostid=<hostid>
           Overrides the default Host Identifier: a UUID (Universally
           Unique Identifier), which must be properly formatted, that
           identifies the NVMe Host.

       -r <filename>, --raw=<filename>
           This option takes the output of the nvme connect-all command and
           dumps it to a raw binary file. By default nvme connect-all dumps
           the output to stdout.

       -d <device>, --device=<device>
           This option takes a device as input. It must be a persistent
           device associated with a Discovery Controller previously created
           by the command "connect-all" or "discover". <device> follows the
           format nvme*, e.g. nvme0, nvme1.

       -C <cfg>, --config-file=<cfg>
           Use the specified JSON configuration file instead of the default
           /etc/nvme/config.json file, or specify "none" to skip reading
           any existing configuration file. The JSON configuration file
           format is documented in
           https://github.com/linux-nvme/libnvme/doc/config-schema.json

       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep-alive timeout (in seconds). This
           option is ignored for discovery, but is passed on to the
           subsequent connect call.

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before a reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in
           seconds).

       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the
           driver. This option is ignored for discovery, but is passed on
           to the subsequent connect call.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling
           latency-sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues
           created by the driver. This option is ignored for discovery, but
           is passed on to the subsequent connect call.

       --keyring=<#>
           Keyring for TLS key lookup.

       --tls_key=<#>
           TLS key for the connection (TCP).

       -g, --hdr-digest
           Generates/verifies the header digest (TCP).

       -G, --data-digest
           Generates/verifies the data digest (TCP).

       -p, --persistent
           Don’t remove the discovery controller after retrieving the
           discovery log page.
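
       A discovery controller kept with --persistent can later be removed
       together with other fabrics controllers via nvme-disconnect-all(1);
       a hedged sketch (the transport address is a placeholder):

           # keep the discovery controller after connecting
           nvme connect-all -t tcp -a 192.168.1.3 -s 4420 --persistent
           # later, tear down all fabrics controllers, persistent ones included
           nvme disconnect-all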

       --tls
           Enable TLS encryption (TCP).

       -S, --quiet
           Suppress error messages.

       -O, --dump-config
           Print the resulting JSON configuration file to stdout.

       --nbft
           Only look at NBFT tables.

       --no-nbft
           Do not look at NBFT tables.

       --nbft-path=<STR>
           Use a user-defined path to the NBFT tables.

EXAMPLES
       •   Connect to all records returned by the Discovery Controller with
           IPv4 address 192.168.1.3 for all resources allocated for the
           NVMe Host named host1-rogue-nqn on the RDMA network. Port 4420
           is used by default:

               # nvme connect-all --transport=rdma --traddr=192.168.1.3 \
                   --hostnqn=host1-rogue-nqn

       •   Issue an nvme connect-all command using the default
           system-defined NBFT tables:

               # nvme connect-all --nbft

       •   Issue an nvme connect-all command with a user-defined path to
           the NBFT table:

               # nvme connect-all --nbft-path=/sys/firmware/acpi/tables/NBFT1

       •   Issue an nvme connect-all command using a
           /etc/nvme/discovery.conf file:

               # Machine default 'nvme discover' commands. Query the
               # Discovery Controller's two ports (some resources may only
               # be accessible on a single port). Note that an official
               # Host NQN, as defined in the NVMe specification, is used
               # in this example.
               -t rdma -a 192.168.69.33 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
               -t rdma -a 192.168.1.4 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

           At the prompt, type "nvme connect-all".

SEE ALSO
       nvme-discover(1), nvme-connect(1)

NVME
       Part of the nvme-user suite



NVMe                            10/06/2023               NVME-CONNECT-ALL(1)