NVME-CONNECT(1)                   NVMe Manual                  NVME-CONNECT(1)


NAME
       nvme-connect - Connect to a Fabrics controller.

SYNOPSIS
       nvme connect
                       [--transport=<trtype>     | -t <trtype>]
                       [--nqn=<subnqn>           | -n <subnqn>]
                       [--traddr=<traddr>        | -a <traddr>]
                       [--trsvcid=<trsvcid>      | -s <trsvcid>]
                       [--host-traddr=<traddr>   | -w <traddr>]
                       [--host-iface=<iface>     | -f <iface>]
                       [--hostnqn=<hostnqn>      | -q <hostnqn>]
                       [--hostid=<hostid>        | -I <hostid>]
                       [--config-file=<cfg>      | -J <cfg>]
                       [--dhchap-secret=<secret> | -S <secret>]
                       [--dhchap-ctrl-secret=<secret> | -C <secret>]
                       [--nr-io-queues=<#>       | -i <#>]
                       [--nr-write-queues=<#>    | -W <#>]
                       [--nr-poll-queues=<#>     | -P <#>]
                       [--queue-size=<#>         | -Q <#>]
                       [--keep-alive-tmo=<#>     | -k <#>]
                       [--reconnect-delay=<#>    | -c <#>]
                       [--ctrl-loss-tmo=<#>      | -l <#>]
                       [--duplicate-connect      | -D]
                       [--disable-sqflow         | -d]
                       [--hdr-digest             | -g]
                       [--data-digest            | -G]
                       [--dump-config            | -O]
                       [--output-format=<fmt>    | -o <fmt>]

DESCRIPTION
       Create a transport connection to a remote system (specified by --traddr
       and --trsvcid) and create an NVMe over Fabrics controller for the NVMe
       subsystem specified by the --nqn option.

OPTIONS
       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc)                       │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │tcp   │ The network fabric is a    │
           │      │ TCP/IP network.            │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host.                │
           └──────┴────────────────────────────┘

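           For example, a TCP connection to a remote subsystem might look
           like this (the address and subsystem NQN below are placeholders):

               # nvme connect --transport=tcp --traddr=192.168.1.10 \
               --trsvcid=4420 --nqn=nqn.2014-08.com.example:nvme:subsys1
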
       -n <subnqn>, --nqn=<subnqn>
           This field specifies the name for the NVMe subsystem to connect to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Controller. For
           transports using IP addressing (e.g. rdma) this should be an
           IP-based address (e.g. IPv4).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports using
           IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.

       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller. For TCP, this sets the source address on
           the socket.

       -f <iface>, --host-iface=<iface>
           This field specifies the network interface used on the host to
           connect to the Controller (e.g. eth1, enp2s0, enx78e7d1ea46da).
           This forces the connection to be made on a specific interface
           instead of letting the system decide.

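           For example, to pin the connection to interface eth1 (the address
           and NQN are placeholders):

               # nvme connect -t tcp -a 192.168.1.10 -f eth1 \
               -n nqn.2014-08.com.example:nvme:subsys1
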
       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default Host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the autogenerated
           NQN value from the NVMe Host kernel module is used next. The Host
           NQN uniquely identifies the NVMe Host.

       -I <hostid>, --hostid=<hostid>
           UUID (Universally Unique Identifier) that identifies the NVMe Host,
           overriding the default Host Identifier. The value must be formatted
           as a valid UUID string.

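           For example, to override both the Host NQN and the Host ID (the
           values shown are placeholders):

               # nvme connect -t tcp -a 192.168.1.10 \
               -n nqn.2014-08.com.example:nvme:subsys1 \
               --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<uuid> \
               --hostid=<uuid>
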
       -J <cfg>, --config-file=<cfg>
           Use the specified JSON configuration file instead of the default
           /etc/nvme/config.json file, or specify none to skip reading any
           existing configuration file. The JSON configuration file format is
           documented at
           https://github.com/linux-nvme/libnvme/doc/config-schema.json

       -S <secret>, --dhchap-secret=<secret>
           NVMe in-band authentication secret; needs to be in ASCII format as
           specified in NVMe 2.0 section 8.13.5.8 "Secret representation". If
           this option is not specified, the default is read from
           /etc/nvme/hostkey. If that does not exist, no in-band
           authentication is attempted.

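           For example, a sketch of a connection using in-band
           authentication; the secret shown is a placeholder in the DHHC-1
           representation, not a usable key:

               # nvme connect -t tcp -a 192.168.1.10 \
               -n nqn.2014-08.com.example:nvme:subsys1 \
               --dhchap-secret="DHHC-1:00:<base64-encoded-key>:"
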
       -C <secret>, --dhchap-ctrl-secret=<secret>
           NVMe in-band authentication controller secret for bi-directional
           authentication; needs to be in ASCII format as specified in NVMe
           2.0 section 8.13.5.8 "Secret representation". If not present,
           bi-directional authentication is not attempted.

       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the driver.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling
           latency-sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues created
           by the driver.

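           For example, a sketch combining the queue-tuning options above;
           the values are illustrative only, and sensible numbers depend on
           the workload and hardware:

               # nvme connect -t tcp -a 192.168.1.10 \
               -n nqn.2014-08.com.example:nvme:subsys1 \
               --nr-io-queues=8 --nr-write-queues=2 \
               --nr-poll-queues=2 --queue-size=256
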
       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep-alive timeout (in seconds).

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before a reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in seconds).

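           For example, to retry every 10 seconds and give up on the
           controller after 600 seconds (illustrative values):

               # nvme connect -t tcp -a 192.168.1.10 \
               -n nqn.2014-08.com.example:nvme:subsys1 \
               --reconnect-delay=10 --ctrl-loss-tmo=600
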
       -D, --duplicate-connect
           Allows duplicate connections between the same transport host and
           subsystem port.

       -d, --disable-sqflow
           Disables SQ flow control to omit head doorbell update for
           submission queues when sending NVMe completions.

       -g, --hdr-digest
           Generates/verifies header digest (TCP).

       -G, --data-digest
           Generates/verifies data digest (TCP).

       -O, --dump-config
           Print the resulting JSON configuration to stdout.

       -o <format>, --output-format=<format>
           Set the reporting format to normal or json. Only one output format
           can be used at a time. When this option is specified, the device
           associated with the connection will be printed. Nothing is printed
           otherwise.

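           For example, to print the device created for the connection in
           JSON format (address and NQN are placeholders):

               # nvme connect -t tcp -a 192.168.1.10 \
               -n nqn.2014-08.com.example:nvme:subsys1 -o json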

EXAMPLES
       •   Connect to a subsystem named
           nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 on the IPv4
           address 192.168.1.3. Port 4420 is used by default:

               # nvme connect --transport=rdma --traddr=192.168.1.3 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

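       •   Connect to the same subsystem over TCP, enabling header and data
           digests (the address is a placeholder):

               # nvme connect --transport=tcp --traddr=192.168.1.3 \
               --trsvcid=4420 --hdr-digest --data-digest \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432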

SEE ALSO
       nvme-discover(1), nvme-connect-all(1)

AUTHORS
       This was co-written by Jay Freyensee[1] and Christoph Hellwig[2].

NVME
       Part of the nvme-user suite

NOTES
        1. Jay Freyensee
           mailto:james.p.freyensee@intel.com

        2. Christoph Hellwig
           mailto:hch@lst.de

NVMe                              11/04/2022                   NVME-CONNECT(1)