NVME-CONNECT-ALL(1)               NVMe Manual              NVME-CONNECT-ALL(1)


NAME
       nvme-connect-all - Discover and Connect to Fabrics controllers.

SYNOPSIS
       nvme connect-all
                    [--transport=<trtype> | -t <trtype>]
                    [--nqn=<subnqn> | -n <subnqn>]
                    [--traddr=<traddr> | -a <traddr>]
                    [--trsvcid=<trsvcid> | -s <trsvcid>]
                    [--host-traddr=<traddr> | -w <traddr>]
                    [--host-iface=<iface> | -f <iface>]
                    [--hostnqn=<hostnqn> | -q <hostnqn>]
                    [--hostid=<hostid> | -I <hostid>]
                    [--raw=<filename> | -r <filename>]
                    [--device=<device> | -d <device>]
                    [--config-file=<cfg> | -C <cfg>]
                    [--keep-alive-tmo=<sec> | -k <sec>]
                    [--reconnect-delay=<#> | -c <#>]
                    [--ctrl-loss-tmo=<#> | -l <#>]
                    [--nr-io-queues=<#> | -i <#>]
                    [--nr-write-queues=<#> | -W <#>]
                    [--nr-poll-queues=<#> | -P <#>]
                    [--queue-size=<#> | -Q <#>]
                    [--keyring=<#>]
                    [--tls_key=<#>]
                    [--hdr-digest | -g]
                    [--data-digest | -G]
                    [--persistent | -p]
                    [--tls]
                    [--quiet | -S]
                    [--dump-config | -O]
                    [--nbft]
                    [--no-nbft]
                    [--nbft-path=<STR>]
                    [--context=<STR>]

DESCRIPTION
       Send one or more Discovery requests to an NVMe over Fabrics Discovery
       Controller, and create controllers for the returned discovery records.

       If no parameters are given, then nvme connect-all will attempt to find
       a /etc/nvme/discovery.conf file that supplies a list of connect-all
       commands to run. If no /etc/nvme/discovery.conf file exists, the
       command will quit with an error.

       Otherwise, a specific Discovery Controller should be specified using
       the --transport, --traddr and, if necessary, the --trsvcid options,
       and a Discovery request will be sent to the specified Discovery
       Controller.

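       For example, a Discovery request can be sent to a Discovery Controller
       reachable over TCP as follows (the address below is a placeholder for
       illustration; port 8009 is commonly used for NVMe/TCP discovery, but
       check the target's configuration):

           # nvme connect-all --transport=tcp --traddr=192.168.0.10 --trsvcid=8009
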
       See the documentation for the nvme-discover(1) command for further
       background.

OPTIONS
       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc)                       │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │tcp   │ The network fabric is a    │
           │      │ TCP/IP network.            │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host                 │
           └──────┴────────────────────────────┘

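           For example, to discover and connect to a target running on the
           local host via the loop transport (this assumes a local NVMe
           target with a loop port and a discovery subsystem has already been
           configured, e.g. for testing):

               # nvme connect-all --transport=loop
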
       -n <subnqn>, --nqn <subnqn>
           This field specifies the name for the NVMe subsystem to connect
           to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Discovery
           Controller. For transports using IP addressing (e.g. rdma) this
           should be an IP-based address (e.g. IPv4).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports
           using IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller. For TCP, this sets the source address
           on the socket.

       -f <iface>, --host-iface=<iface>
           This field specifies the network interface used on the host to
           connect to the Controller (e.g. IP eth1, enp2s0, enx78e7d1ea46da).
           This forces the connection to be made on a specific interface
           instead of letting the system decide.

       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default Host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the autogenerated
           NQN value from the NVMe Host kernel module is used next. The Host
           NQN uniquely identifies the NVMe Host, and may be used by the
           Discovery Controller to control what NVMe Target resources are
           allocated to the NVMe Host for a connection.

       -I <hostid>, --hostid=<hostid>
           UUID (Universally Unique Identifier) that identifies the NVMe Host
           making the connection. It must be formatted as a UUID.

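           For example (the transport address and UUID below are placeholders
           for illustration; any properly formatted UUID may be used):

               # nvme connect-all -t rdma -a 192.168.1.3 \
                   --hostid=f87a9246-8ebb-4d1e-9e2e-3c1d6a1b2c3d
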
       -r <filename>, --raw=<filename>
           This field will take the output of the nvme connect-all command
           and dump it to a raw binary file. By default nvme connect-all will
           dump the output to stdout.

       -d <device>, --device=<device>
           This field takes a device as input. It must be a persistent device
           associated with a Discovery Controller previously created by the
           command "connect-all" or "discover". <device> follows the format
           nvme*, e.g. nvme0, nvme1.

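           For example, a persistent Discovery Controller created with
           --persistent may be reused by a later connect-all invocation (the
           device name and transport parameters below are placeholders for
           illustration and depend on the local setup):

               # nvme connect-all --persistent -t tcp -a 192.168.1.3 -s 8009
               # nvme connect-all --device=nvme0
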
       -C <cfg>, --config-file=<cfg>
           Use the specified JSON configuration file instead of the default
           /etc/nvme/config.json file or none to not read in an existing
           configuration file. The JSON configuration file format is
           documented in
           https://github.com/linux-nvme/libnvme/doc/config-schema.json

       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep alive timeout (in seconds). This option
           will be ignored for discovery, but will be passed on to the
           subsequent connect call.

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in seconds).

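           For example, to retry a lost connection every 10 seconds and give
           up on the controller after 600 seconds (the transport parameters
           below are placeholders for illustration):

               # nvme connect-all -t tcp -a 192.168.1.3 -s 8009 \
                   --reconnect-delay=10 --ctrl-loss-tmo=600
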
       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the driver.
           This option will be ignored for discovery, but will be passed on
           to the subsequent connect call.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling latency
           sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues created
           by the driver. This option will be ignored for discovery, but will
           be passed on to the subsequent connect call.

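           For example, to request 8 I/O queues with 1024 elements each for
           the resulting connections (the values and transport parameters
           below are placeholders for illustration):

               # nvme connect-all -t tcp -a 192.168.1.3 -s 8009 \
                   --nr-io-queues=8 --queue-size=1024
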
       --keyring=<#>
           Keyring for TLS key lookup.

       --tls_key=<#>
           TLS key for the connection (TCP).

       -g, --hdr-digest
           Generates/verifies header digest (TCP).

       -G, --data-digest
           Generates/verifies data digest (TCP).

       -p, --persistent
           Don’t remove the discovery controller after retrieving the
           discovery log page.

       --tls
           Enable TLS encryption (TCP).

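           For example, to discover and connect over TCP with TLS enabled,
           assuming a suitable TLS pre-shared key has already been
           provisioned in the kernel keyring (the address and port below are
           placeholders for illustration):

               # nvme connect-all -t tcp -a 192.168.1.3 -s 4420 --tls
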
       -S, --quiet
           Suppress error messages.

       -O, --dump-config
           Print out resulting JSON configuration file to stdout.

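           For example, the resulting configuration can be captured to a file
           and later passed back via --config-file (the path and transport
           parameters below are placeholders for illustration):

               # nvme connect-all -t tcp -a 192.168.1.3 -s 8009 \
                   --dump-config > /tmp/nvme-config.json
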
       --nbft
           Only look at NBFT tables

       --no-nbft
           Do not look at NBFT tables

       --nbft-path=<STR>
           Use a user-defined path to the NBFT tables

       --context=<STR>
           Set the execution context to <STR>. This allows coordinating the
           management of the global resources.

EXAMPLES
       •   Connect to all records returned by the Discovery Controller with
           IPv4 address 192.168.1.3 for all resources allocated for NVMe Host
           name host1-rogue-nqn on the RDMA network. Port 4420 is used by
           default:

               # nvme connect-all --transport=rdma --traddr=192.168.1.3 \
                   --hostnqn=host1-rogue-nqn

       •   Issue an nvme connect-all command using the default system defined
           NBFT tables:

               # nvme connect-all --nbft

       •   Issue an nvme connect-all command with a user-defined path for the
           NBFT table:

               # nvme connect-all --nbft-path=/sys/firmware/acpi/tables/NBFT1

       •   Issue an nvme connect-all command using a /etc/nvme/discovery.conf
           file:

               # Machine default 'nvme discover' commands. Query the
               # Discovery Controller's two ports (some resources may only
               # be accessible on a single port). Note an official
               # nqn (Host) name defined in the NVMe specification is being used
               # in this example.
               -t rdma -a 192.168.69.33 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
               -t rdma -a 192.168.1.4 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

           At the prompt type "nvme connect-all".

SEE ALSO
       nvme-discover(1) nvme-connect(1)

NVME
       Part of the nvme-user suite



NVMe                              09/29/2023             NVME-CONNECT-ALL(1)