NVME-DISCOVER(1)                  NVMe Manual                 NVME-DISCOVER(1)


NAME
       nvme-discover - Send Get Log Page request to Discovery Controller.

SYNOPSIS
       nvme discover
                       [--transport=<trtype>     | -t <trtype>]
                       [--nqn=<subnqn>           | -n <subnqn>]
                       [--traddr=<traddr>        | -a <traddr>]
                       [--trsvcid=<trsvcid>      | -s <trsvcid>]
                       [--host-traddr=<traddr>   | -w <traddr>]
                       [--host-iface=<iface>     | -f <iface>]
                       [--hostnqn=<hostnqn>      | -q <hostnqn>]
                       [--hostid=<hostid>        | -I <hostid>]
                       [--raw=<filename>         | -r <filename>]
                       [--device=<device>        | -d <device>]
                       [--config-file=<cfg>      | -C <cfg> ]
                       [--keep-alive-tmo=<sec>   | -k <sec>]
                       [--reconnect-delay=<#>    | -c <#>]
                       [--ctrl-loss-tmo=<#>      | -l <#>]
                       [--hdr_digest             | -g]
                       [--data_digest            | -G]
                       [--nr-io-queues=<#>       | -i <#>]
                       [--nr-write-queues=<#>    | -W <#>]
                       [--nr-poll-queues=<#>     | -P <#>]
                       [--queue-size=<#>         | -Q <#>]
                       [--persistent             | -p]
                       [--quiet                  | -S]
                       [--dump-config            | -O]
                       [--output-format=<fmt>    | -o <fmt>]
                       [--force]

DESCRIPTION
       Send one or more Get Log Page requests to an NVMe-over-Fabrics
       Discovery Controller.

       If no parameters are given, then nvme discover will attempt to find a
       /etc/nvme/discovery.conf file to use to supply a list of Discovery
       commands to run. If no /etc/nvme/discovery.conf file exists, the
       command will quit with an error.

       Otherwise, a specific Discovery Controller should be specified using
       the --transport, --traddr, and if necessary the --trsvcid flags. A
       Discovery request will then be sent to the specified Discovery
       Controller.

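       For example, the following queries a Discovery Controller reachable
       over TCP (the address and port below are placeholders; substitute the
       values for your own fabric):

           # nvme discover --transport=tcp --traddr=192.0.2.1 --trsvcid=4420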

BACKGROUND
       The NVMe-over-Fabrics specification defines the concept of a Discovery
       Controller that an NVMe Host can query on a fabric network to discover
       NVMe subsystems contained in NVMe Targets which it can connect to on
       the network. The Discovery Controller will return Discovery Log Pages
       that provide the NVMe Host with specific information (such as network
       address and unique subsystem NQN) the NVMe Host can use to issue an
       NVMe connect command to connect itself to a storage resource contained
       in that NVMe subsystem on the NVMe Target.

       Note that the base NVMe specification defines the NQN (NVMe Qualified
       Name) format which an NVMe endpoint (device, subsystem, etc) must
       follow to guarantee a unique name under the NVMe standard. In
       particular, the Host NQN uniquely identifies the NVMe Host, and may be
       used by the Discovery Controller to control what NVMe Target resources
       are allocated to the NVMe Host for a connection.

       A Discovery Controller has its own NQN defined in the
       NVMe-over-Fabrics specification, nqn.2014-08.org.nvmexpress.discovery.
       All Discovery Controllers must use this NQN name. This NQN is used by
       default by nvme-cli for the discover command.

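       Passing this well-known NQN explicitly with --nqn should therefore be
       equivalent to omitting the option (the address and port below are
       placeholders):

           # nvme discover -t tcp -a 192.0.2.1 -s 4420 \
               -n nqn.2014-08.org.nvmexpress.discovery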

OPTIONS
       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc)                       │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │tcp   │ The network fabric is a    │
           │      │ TCP/IP network.            │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host                 │
           └──────┴────────────────────────────┘

       -n <subnqn>, --nqn <subnqn>
           This field specifies the name for the NVMe subsystem to connect to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Discovery
           Controller. For transports using IP addressing (e.g. rdma) this
           should be an IP-based (e.g. IPv4) address.

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports using
           IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller. For TCP, this sets the source address on
           the socket.

       -f <iface>, --host-iface=<iface>
           This field specifies the network interface used on the host to
           connect to the Controller (e.g. eth1, enp2s0, enx78e7d1ea46da).
           This forces the connection to be made on a specific interface
           instead of letting the system decide.

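           For example, to force the discovery connection out of a specific
           source address and interface (all values below are placeholders):

               # nvme discover -t tcp -a 192.0.2.1 -s 4420 \
                   -w 192.0.2.20 -f eth1
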
       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the autogenerated
           NQN value from the NVMe Host kernel module is used next.

       -I <hostid>, --hostid=<hostid>
           UUID (Universally Unique Identifier) that identifies the NVMe
           Host; the value should be a properly formatted UUID string.

       -r <filename>, --raw=<filename>
           This field will take the output of the nvme discover command and
           dump it to a raw binary file. By default nvme discover will dump
           the output to stdout.

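           For example, to save the returned Discovery Log Page to a binary
           file instead of printing it (transport details are placeholders):

               # nvme discover -t tcp -a 192.0.2.1 -s 4420 --raw=discovery.bin
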
       -d <device>, --device=<device>
           This field takes a device as input. Device is in the format of
           nvme*, e.g. nvme0, nvme1.

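           For example, assuming nvme1 is an already existing discovery
           controller device (such as one left behind by --persistent), the
           log pages can be retrieved through it directly:

               # nvme discover --device=nvme1
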
       -C <cfg>, --config-file=<cfg>
           Use the specified JSON configuration file instead of the default
           /etc/nvme/config.json file, or specify "none" to not read in an
           existing configuration file. The JSON configuration file format is
           documented in
           https://github.com/linux-nvme/libnvme/doc/config-schema.json

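           For example, to ignore any existing configuration file for a
           single discovery (the other values are placeholders):

               # nvme discover -t tcp -a 192.0.2.1 -s 4420 --config-file=none
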
       -k <#>, --keep-alive-tmo=<#>
           Overrides the default delay (in seconds) for keep alive. This
           option will be ignored for discovery, and it is only implemented
           for completeness.

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in seconds).

       -g, --hdr_digest
           Generates/verifies header digest (TCP).

       -G, --data_digest
           Generates/verifies data digest (TCP).

       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the driver.
           This option will be ignored for discovery, and it is only
           implemented for completeness.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling latency
           sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues created
           by the driver which can be found at drivers/nvme/host/fabrics.h.
           This option will be ignored for discovery, and it is only
           implemented for completeness.

       -p, --persistent
           Persistent discovery connection.

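           For example, to leave the discovery controller connected after the
           log pages have been fetched, so that it can later be reused via
           --device (the address and port are placeholders):

               # nvme discover -t tcp -a 192.0.2.1 -s 4420 --persistent
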
       -S, --quiet
           Suppress already connected errors.

       -O, --dump-config
           Print out the resulting JSON configuration file to stdout.

       -o <format>, --output-format=<format>
           Set the reporting format to normal, json, or binary. Only one
           output format can be used at a time.

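           For example, to emit the discovery records as JSON for further
           scripting (transport details are placeholders):

               # nvme discover -t tcp -a 192.0.2.1 -s 4420 --output-format=json
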
       --force
           Disable the built-in persistent discover connection rules.
           Combined with the --persistent flag, this always creates a new
           persistent discovery connection.

EXAMPLES
       •   Query the Discovery Controller with IPv4 address 192.168.1.3 for
           all resources allocated for NVMe Host name host1-rogue-nqn on the
           RDMA network. Port 4420 is used by default:

               # nvme discover --transport=rdma --traddr=192.168.1.3 \
               --hostnqn=host1-rogue-nqn

       •   Issue an nvme discover command using a /etc/nvme/discovery.conf
           file:

               # Machine default 'nvme discover' commands.  Query the
               # Discovery Controller's two ports (some resources may only
               # be accessible on a single port).  Note an official
               # nqn (Host) name defined in the NVMe specification is being used
               # in this example.
               -t rdma -a 192.168.69.33 -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
               -t rdma -a 192.168.1.4   -s 4420 -q nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

               At the prompt type "nvme discover".

SEE ALSO
       nvme-connect(1) nvme-connect-all(1)

AUTHORS
       This was written by Jay Freyensee[1]

NVME
       Part of the nvme-user suite

NOTES
        1. Jay Freyensee
           mailto:james.p.freyensee@intel.com


NVMe                              04/11/2022                  NVME-DISCOVER(1)