NVME-DISCOVER(1)                  NVMe Manual                 NVME-DISCOVER(1)


NAME
       nvme-discover - Send Get Log Page request to Discovery Controller.

SYNOPSIS
       nvme discover
               [--transport=<trtype> | -t <trtype>]
               [--nqn=<subnqn> | -n <subnqn>]
               [--traddr=<traddr> | -a <traddr>]
               [--trsvcid=<trsvcid> | -s <trsvcid>]
               [--host-traddr=<traddr> | -w <traddr>]
               [--host-iface=<iface> | -f <iface>]
               [--hostnqn=<hostnqn> | -q <hostnqn>]
               [--hostid=<hostid> | -I <hostid>]
               [--raw=<filename> | -r <filename>]
               [--device=<device> | -d <device>]
               [--config-file=<cfg> | -C <cfg>]
               [--keep-alive-tmo=<sec> | -k <sec>]
               [--reconnect-delay=<#> | -c <#>]
               [--ctrl-loss-tmo=<#> | -l <#>]
               [--nr-io-queues=<#> | -i <#>]
               [--nr-write-queues=<#> | -W <#>]
               [--nr-poll-queues=<#> | -P <#>]
               [--queue-size=<#> | -Q <#>]
               [--keyring=<#>]
               [--tls_key=<#>]
               [--hdr-digest | -g]
               [--data-digest | -G]
               [--persistent | -p]
               [--quiet | -S]
               [--tls]
               [--dump-config | -O]
               [--output-format=<fmt> | -o <fmt>]
               [--force]
               [--nbft]
               [--no-nbft]
               [--nbft-path=<STR>]
               [--context=<STR>]

DESCRIPTION
       Send one or more Get Log Page requests to an NVMe-over-Fabrics
       Discovery Controller.

       If no parameters are given, then nvme discover will attempt to find a
       /etc/nvme/discovery.conf file to use to supply a list of Discovery
       commands to run. If no /etc/nvme/discovery.conf file exists, the
       command will quit with an error.

       Otherwise, a specific Discovery Controller should be specified using
       the --transport, --traddr, and, if necessary, the --trsvcid flags. A
       Discovery request will then be sent to the specified Discovery
       Controller.
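
       For example, to query a Discovery Controller reachable over TCP (the
       address below is illustrative; 8009 is the conventional NVMe/TCP
       discovery port):

           # nvme discover --transport=tcp --traddr=10.0.0.5 --trsvcid=8009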

BACKGROUND
       The NVMe-over-Fabrics specification defines the concept of a Discovery
       Controller that an NVMe Host can query on a fabric network to discover
       NVMe subsystems contained in NVMe Targets which it can connect to on
       the network. The Discovery Controller will return Discovery Log Pages
       that provide the NVMe Host with specific information (such as network
       address and unique subsystem NQN) the NVMe Host can use to issue an
       NVMe connect command to connect itself to a storage resource contained
       in that NVMe subsystem on the NVMe Target.

       Note that the base NVMe specification defines the NQN (NVMe Qualified
       Name) format which an NVMe endpoint (device, subsystem, etc.) must
       follow to guarantee a unique name under the NVMe standard. In
       particular, the Host NQN uniquely identifies the NVMe Host, and may be
       used by the Discovery Controller to control what NVMe Target resources
       are allocated to the NVMe Host for a connection.

       A Discovery Controller has its own NQN defined in the
       NVMe-over-Fabrics specification, nqn.2014-08.org.nvmexpress.discovery.
       All Discovery Controllers must use this NQN name. This NQN is used by
       default by nvme-cli for the discover command.
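
       Because this NQN is the default, the following two invocations are
       equivalent (the address is illustrative):

           # nvme discover -t rdma -a 192.168.1.3 -s 4420
           # nvme discover -t rdma -a 192.168.1.3 -s 4420 \
               -n nqn.2014-08.org.nvmexpress.discovery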

OPTIONS
       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc.)                      │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │tcp   │ The network fabric is a    │
           │      │ TCP/IP network.            │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host.                │
           └──────┴────────────────────────────┘

       -n <subnqn>, --nqn=<subnqn>
           This field specifies the name of the NVMe subsystem to connect
           to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Discovery
           Controller. For transports using IP addressing (e.g. rdma) this
           should be an IP-based address (e.g. an IPv4 address).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports
           using IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.
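
           For example, to select the default RDMA port explicitly (the
           address is illustrative):

               # nvme discover -t rdma -a 192.168.1.3 -s 4420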

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller. For TCP, this sets the source address
           on the socket.

       -f <iface>, --host-iface=<iface>
           This field specifies the network interface used on the host to
           connect to the Controller (e.g. eth1, enp2s0, enx78e7d1ea46da).
           This forces the connection to be made on a specific interface
           instead of letting the system decide.
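
           For example, to force the connection through a specific interface
           (the address and interface name are illustrative):

               # nvme discover -t tcp -a 10.0.0.5 -s 8009 -f enp2s0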

       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the
           autogenerated NQN value from the NVMe Host kernel module is used
           next.
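
           For example, with a hypothetical spec-formatted host NQN (the
           address and NQN are illustrative):

               # nvme discover -t tcp -a 10.0.0.5 -s 8009 \
                   -q nqn.2014-08.com.example:my-host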

       -I <hostid>, --hostid=<hostid>
           UUID (Universally Unique Identifier) that identifies the NVMe
           Host, overriding the default host ID. The value must be formatted
           as a valid UUID string.

       -r <filename>, --raw=<filename>
           This field will take the output of the nvme discover command and
           dump it to a raw binary file. By default nvme discover will dump
           the output to stdout.
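
           For example, to save the raw log page to a file (the address and
           file name are illustrative):

               # nvme discover -t tcp -a 10.0.0.5 -s 8009 \
                   --raw=/tmp/discovery.bin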

       -d <device>, --device=<device>
           This field takes a device as input. It must be a persistent
           device associated with a Discovery Controller previously created
           by the command "connect-all" or "discover". <device> follows the
           format nvme*, e.g. nvme0, nvme1.
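
           For example, to reuse a previously created persistent Discovery
           Controller (the device name is illustrative):

               # nvme discover --device=nvme0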

       -C <cfg>, --config-file=<cfg>
           Use the specified JSON configuration file instead of the default
           /etc/nvme/config.json file, or specify none to not read in an
           existing configuration file. The JSON configuration file format
           is documented in
           https://github.com/linux-nvme/libnvme/doc/config-schema.json
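
           For example, to run discovery without reading any configuration
           file (the address is illustrative):

               # nvme discover -t tcp -a 10.0.0.5 -s 8009 --config-file=none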

       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep-alive timeout (in seconds). This
           option will be ignored for discovery, and it is only implemented
           for completeness.

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in
           seconds).

       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the driver.
           This option will be ignored for discovery, and it is only
           implemented for completeness.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling latency
           sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues
           created by the driver, which can be found at
           drivers/nvme/host/fabrics.h. This option will be ignored for
           discovery, and it is only implemented for completeness.

       --keyring=<#>
           Keyring for TLS key lookup.

       --tls_key=<#>
           TLS key for the connection (TCP).

       -g, --hdr-digest
           Generates/verifies header digest (TCP).

       -G, --data-digest
           Generates/verifies data digest (TCP).

       -p, --persistent
           Don’t remove the discovery controller after retrieving the
           discovery log page.
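
           For example, a persistent discovery controller created by the
           first command can be reused by the second (the address and device
           name are illustrative):

               # nvme discover -t tcp -a 10.0.0.5 -s 8009 --persistent
               # nvme discover --device=nvme0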

       --tls
           Enable TLS encryption (TCP).

       -S, --quiet
           Suppress "already connected" errors.

       -O, --dump-config
           Print the resulting JSON configuration file to stdout.

       -o <format>, --output-format=<format>
           Set the reporting format to normal, json, or binary. Only one
           output format can be used at a time.
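
           For example, to emit the discovery log page as JSON (the address
           is illustrative):

               # nvme discover -t tcp -a 10.0.0.5 -s 8009 -o json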

       --force
           Disable the built-in persistent discover connection rules.
           Combined with the --persistent flag, this always creates a new
           persistent discovery connection.
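
           For example, to force creation of a new persistent discovery
           connection (the address is illustrative):

               # nvme discover -t tcp -a 10.0.0.5 -s 8009 --force \
                   --persistent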

       --nbft
           Only look at NBFT tables.

       --no-nbft
           Do not look at NBFT tables.

       --nbft-path=<STR>
           Use a user-defined path to the NBFT tables.

       --context=<STR>
           Set the execution context to <STR>. This allows coordinating the
           management of global resources.

EXAMPLES
       •   Query the Discovery Controller with the IPv4 address 192.168.1.3
           for all resources allocated for the NVMe Host named
           host1-rogue-nqn on the RDMA network. Port 4420 is used by
           default:

               # nvme discover --transport=rdma --traddr=192.168.1.3 \
                   --hostnqn=host1-rogue-nqn

       •   Issue a nvme discover command using the default system-defined
           NBFT tables:

               # nvme discover --nbft

       •   Issue a nvme discover command with a user-defined path for the
           NBFT table:

               # nvme discover --nbft-path=/sys/firmware/acpi/tables/NBFT1

       •   Issue a nvme discover command using a /etc/nvme/discovery.conf
           file:

               # Machine default 'nvme discover' commands. Query the
               # Discovery Controller's two ports (some resources may only
               # be accessible on a single port). Note that an official
               # NQN (Host) name as defined in the NVMe specification is
               # used in this example.
               -t rdma -a 192.168.69.33 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
               -t rdma -a 192.168.1.4 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

           At the prompt type "nvme discover".

SEE ALSO
       nvme-connect(1) nvme-connect-all(1)

AUTHORS
       This was written by Jay Freyensee[1]

NVME
       Part of the nvme-user suite

NOTES
        1. Jay Freyensee
           mailto:james.p.freyensee@intel.com



NVMe                              09/29/2023                  NVME-DISCOVER(1)