NVME-DISCOVER(1)                  NVMe Manual                 NVME-DISCOVER(1)

NAME

       nvme-discover - Send Get Log Page request to Discovery Controller.

SYNOPSIS

       nvme discover
                       [--transport=<trtype>     | -t <trtype>]
                       [--nqn=<subnqn>           | -n <subnqn>]
                       [--traddr=<traddr>        | -a <traddr>]
                       [--trsvcid=<trsvcid>      | -s <trsvcid>]
                       [--host-traddr=<traddr>   | -w <traddr>]
                       [--host-iface=<iface>     | -f <iface>]
                       [--hostnqn=<hostnqn>      | -q <hostnqn>]
                       [--hostid=<hostid>        | -I <hostid>]
                       [--raw=<filename>         | -r <filename>]
                       [--device=<device>        | -d <device>]
                       [--config-file=<cfg>      | -C <cfg> ]
                       [--keep-alive-tmo=<sec>   | -k <sec>]
                       [--reconnect-delay=<#>    | -c <#>]
                       [--ctrl-loss-tmo=<#>      | -l <#>]
                       [--nr-io-queues=<#>       | -i <#>]
                       [--nr-write-queues=<#>    | -W <#>]
                       [--nr-poll-queues=<#>     | -P <#>]
                       [--queue-size=<#>         | -Q <#>]
                       [--keyring=<#>                    ]
                       [--tls_key=<#>                    ]
                       [--hdr-digest             | -g]
                       [--data-digest            | -G]
                       [--persistent             | -p]
                       [--quiet                  | -S]
                       [--tls                        ]
                       [--dump-config            | -O]
                       [--output-format=<fmt>    | -o <fmt>]
                       [--force]
                       [--nbft]
                       [--no-nbft]
                       [--nbft-path=<STR>]

DESCRIPTION

       Send one or more Get Log Page requests to an NVMe-over-Fabrics
       Discovery Controller.

       If no parameters are given, nvme discover will attempt to read
       /etc/nvme/discovery.conf to obtain a list of Discovery commands to
       run. If /etc/nvme/discovery.conf does not exist, the command will
       quit with an error.

       Otherwise, a specific Discovery Controller should be specified using
       the --transport, --traddr, and if necessary the --trsvcid flags. A
       Discovery request will then be sent to the specified Discovery
       Controller.
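
       For example, a single Discovery Controller can be queried directly
       (the address is a placeholder; port 4420 is assumed by default for
       RDMA, as noted under --trsvcid below):

           # nvme discover --transport=rdma --traddr=192.168.1.3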

BACKGROUND

       The NVMe-over-Fabrics specification defines the concept of a Discovery
       Controller that an NVMe Host can query on a fabric network to discover
       NVMe subsystems contained in NVMe Targets which it can connect to on
       the network. The Discovery Controller will return Discovery Log Pages
       that provide the NVMe Host with specific information (such as network
       address and unique subsystem NQN) the NVMe Host can use to issue an
       NVMe connect command to connect itself to a storage resource contained
       in that NVMe subsystem on the NVMe Target.

       Note that the base NVMe specification defines the NQN (NVMe Qualified
       Name) format which an NVMe endpoint (device, subsystem, etc) must
       follow to guarantee a unique name under the NVMe standard. In
       particular, the Host NQN uniquely identifies the NVMe Host, and may be
       used by the Discovery Controller to control what NVMe Target resources
       are allocated to the NVMe Host for a connection.

       A Discovery Controller has its own NQN defined in the
       NVMe-over-Fabrics specification, nqn.2014-08.org.nvmexpress.discovery.
       All Discovery Controllers must use this NQN name. This NQN is used by
       default by nvme-cli for the discover command.
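
       As a sketch of this flow, once the returned log page reports a
       subsystem, the host can connect to it using nvme-connect(1); the
       address and subsystem NQN below are placeholders for values taken
       from the log page:

           # nvme connect --transport=rdma --traddr=192.168.1.3 \
           --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432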

OPTIONS

       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc)                       │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │tcp   │ The network fabric is a    │
           │      │ TCP/IP network.            │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host                 │
           └──────┴────────────────────────────┘

       -n <subnqn>, --nqn <subnqn>
           This field specifies the name for the NVMe subsystem to connect to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Discovery
           Controller. For transports using IP addressing (e.g. rdma) this
           should be an IP-based address (ex. IPv4).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports using
           IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller. For TCP, this sets the source address on
           the socket.

       -f <iface>, --host-iface=<iface>
           This field specifies the network interface used on the host to
           connect to the Controller (e.g. eth1, enp2s0, enx78e7d1ea46da).
           This forces the connection to be made on a specific interface
           instead of letting the system decide.
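
           For example, to force the discovery connection over a specific
           interface (the interface name, address, and port below are
           placeholders):

               # nvme discover --transport=tcp --traddr=192.168.1.3 \
               --trsvcid=8009 --host-iface=eth1
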
       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the autogenerated
           NQN value from the NVMe Host kernel module is used next.

       -I <hostid>, --hostid=<hostid>
           UUID (Universally Unique Identifier) that identifies the NVMe
           Host. The value should be provided in standard UUID format.

       -r <filename>, --raw=<filename>
           This field will take the output of the nvme discover command and
           dump it to a raw binary file. By default nvme discover will dump
           the output to stdout.
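
           For example, to save the returned log page to a raw binary file
           (the file name is arbitrary):

               # nvme discover --transport=rdma --traddr=192.168.1.3 \
               --raw=discovery_log.bin
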
       -d <device>, --device=<device>
           This field takes a device as input. It must be a persistent device
           associated with a Discovery Controller previously created by the
           command "connect-all" or "discover". <device> follows the format
           nvme*, e.g. nvme0, nvme1.
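
           For example, a persistent discovery controller created with
           --persistent can later be queried again by device name (the
           address and device name below are placeholders):

               # nvme discover --transport=rdma --traddr=192.168.1.3 --persistent
               # nvme discover --device=nvme1
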
       -C <cfg>, --config-file=<cfg>
           Use the specified JSON configuration file instead of the default
           /etc/nvme/config.json file, or specify none to not read in an
           existing configuration file. The JSON configuration file format is
           documented in
           https://github.com/linux-nvme/libnvme/doc/config-schema.json

       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep alive timeout (in seconds). This option
           will be ignored for discovery, and it is only implemented for
           completeness.

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in seconds).

       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the driver.
           This option will be ignored for discovery, and it is only
           implemented for completeness.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling latency
           sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues created
           by the driver, which can be found in drivers/nvme/host/fabrics.h.
           This option will be ignored for discovery, and it is only
           implemented for completeness.

       --keyring=<#>
           Keyring for TLS key lookup.

       --tls_key=<#>
           TLS key for the connection (TCP).

       -g, --hdr-digest
           Generates/verifies header digest (TCP).

       -G, --data-digest
           Generates/verifies data digest (TCP).

       -p, --persistent
           Don’t remove the discovery controller after retrieving the
           discovery log page.

       --tls
           Enable TLS encryption (TCP).

       -S, --quiet
           Suppress already connected errors.

       -O, --dump-config
           Print out the resulting JSON configuration file to stdout.

       -o <format>, --output-format=<format>
           Set the reporting format to normal, json, or binary. Only one
           output format can be used at a time.
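
           For example, to print the discovery log page as JSON (the address
           is a placeholder):

               # nvme discover --transport=rdma --traddr=192.168.1.3 \
               --output-format=json
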
       --force
           Disable the built-in persistent discover connection rules. Combined
           with the --persistent flag, this always creates a new persistent
           discovery connection.

       --nbft
           Only look at NBFT tables.

       --no-nbft
           Do not look at NBFT tables.

       --nbft-path=<STR>
           Use a user-defined path to the NBFT tables.

EXAMPLES

       •   Query the Discovery Controller with IPv4 address 192.168.1.3 for
           all resources allocated for NVMe Host name host1-rogue-nqn on the
           RDMA network. Port 4420 is used by default:

               # nvme discover --transport=rdma --traddr=192.168.1.3 \
               --hostnqn=host1-rogue-nqn

       •   Issue an nvme discover command using the default system defined
           NBFT tables:

               # nvme discover --nbft

       •   Issue an nvme discover command with a user-defined path for the
           NBFT table:

               # nvme discover --nbft-path=/sys/firmware/acpi/tables/NBFT1

       •   Issue an nvme discover command using a /etc/nvme/discovery.conf
           file:

               # Machine default 'nvme discover' commands.  Query the
               # Discovery Controller's two ports (some resources may only
               # be accessible on a single port).  Note an official
               # nqn (Host) name defined in the NVMe specification is being used
               # in this example.
               -t rdma -a 192.168.69.33 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
               -t rdma -a 192.168.1.4   -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

               At the prompt type "nvme discover".

SEE ALSO

       nvme-connect(1) nvme-connect-all(1)

AUTHORS

       This was written by Jay Freyensee[1]

NVME

       Part of the nvme-user suite

NOTES

        1. Jay Freyensee
           mailto:james.p.freyensee@intel.com
NVMe                              10/06/2023                  NVME-DISCOVER(1)