NVME-CONNECT(1)                   NVMe Manual                  NVME-CONNECT(1)

NAME

       nvme-connect - Connect to a Fabrics controller.

SYNOPSIS

       nvme connect
                       [--transport=<trtype>     | -t <trtype>]
                       [--nqn=<subnqn>           | -n <subnqn>]
                       [--traddr=<traddr>        | -a <traddr>]
                       [--trsvcid=<trsvcid>      | -s <trsvcid>]
                       [--host-traddr=<traddr>   | -w <traddr>]
                       [--hostnqn=<hostnqn>      | -q <hostnqn>]
                       [--hostid=<hostid>        | -I <hostid>]
                       [--nr-io-queues=<#>       | -i <#>]
                       [--nr-write-queues=<#>    | -W <#>]
                       [--nr-poll-queues=<#>     | -P <#>]
                       [--queue-size=<#>         | -Q <#>]
                       [--keep-alive-tmo=<#>     | -k <#>]
                       [--reconnect-delay=<#>    | -c <#>]
                       [--ctrl-loss-tmo=<#>      | -l <#>]
                       [--duplicate-connect      | -D]
                       [--disable-sqflow         | -d]
                       [--hdr-digest             | -g]
                       [--data-digest            | -G]

DESCRIPTION

       Create a transport connection to a remote system (specified by --traddr
       and --trsvcid) and create an NVMe over Fabrics controller for the NVMe
       subsystem specified by the --nqn option.

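       As a sketch, a typical invocation combines these options as shown
       below; the angle-bracket values are placeholders for site-specific
       settings, not defaults:

           # nvme connect --transport=<trtype> --traddr=<traddr> \
           --trsvcid=<trsvcid> --nqn=<subnqn>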

OPTIONS

       -t <trtype>, --transport=<trtype>
           This field specifies the network fabric being used for an
           NVMe-over-Fabrics network. Current string values include:

           ┌──────┬────────────────────────────┐
           │Value │ Definition                 │
           ├──────┼────────────────────────────┤
           │rdma  │ The network fabric is an   │
           │      │ rdma network (RoCE, iWARP, │
           │      │ Infiniband, basic rdma,    │
           │      │ etc)                       │
           ├──────┼────────────────────────────┤
           │fc    │ WIP The network fabric is  │
           │      │ a Fibre Channel network.   │
           ├──────┼────────────────────────────┤
           │loop  │ Connect to an NVMe over    │
           │      │ Fabrics target on the      │
           │      │ local host                 │
           └──────┴────────────────────────────┘

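           For example, a minimal sketch of connecting to a target on the
           local host over the loop transport (the subsystem NQN below is
           hypothetical):

               # nvme connect --transport=loop \
               --nqn=nqn.2014-08.com.example:nvme:local-subsystem
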
       -n <subnqn>, --nqn=<subnqn>
           This field specifies the name of the NVMe subsystem to connect to.

       -a <traddr>, --traddr=<traddr>
           This field specifies the network address of the Controller. For
           transports using IP addressing (e.g. rdma) this should be an
           IP-based address (e.g. IPv4).

       -s <trsvcid>, --trsvcid=<trsvcid>
           This field specifies the transport service id. For transports using
           IP addressing (e.g. rdma) this field is the port number. By
           default, the IP port number for the RDMA transport is 4420.

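           For instance, the default RDMA port can also be given explicitly;
           the address and subsystem NQN below are placeholder values:

               # nvme connect --transport=rdma --traddr=192.168.1.3 \
               --trsvcid=4420 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
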
       -w <traddr>, --host-traddr=<traddr>
           This field specifies the network address used on the host to
           connect to the Controller.

       -q <hostnqn>, --hostnqn=<hostnqn>
           Overrides the default Host NQN that identifies the NVMe Host. If
           this option is not specified, the default is read from
           /etc/nvme/hostnqn first. If that does not exist, the autogenerated
           NQN value from the NVMe Host kernel module is used next. The Host
           NQN uniquely identifies the NVMe Host.

       -I <hostid>, --hostid=<hostid>
           Overrides the Host Identifier sent to the Controller. The value
           should be formatted as a UUID (Universally Unique Identifier)
           string.

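           As an illustration, both host identifiers can be overridden on the
           command line; the host NQN, UUID, address, and subsystem NQN below
           are made-up example values:

               # nvme connect --transport=rdma --traddr=192.168.1.3 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 \
               --hostnqn=nqn.2014-08.com.example:host:host1 \
               --hostid=e6e37af2-f42a-4c3c-96a8-1f7e2a6d3f01
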
       -i <#>, --nr-io-queues=<#>
           Overrides the default number of I/O queues created by the driver.

       -W <#>, --nr-write-queues=<#>
           Adds additional queues that will be used for write I/O.

       -P <#>, --nr-poll-queues=<#>
           Adds additional queues that will be used for polling latency
           sensitive I/O.

       -Q <#>, --queue-size=<#>
           Overrides the default number of elements in the I/O queues created
           by the driver.

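           As a sketch, the queue layout can be tuned at connect time; the
           counts below are arbitrary illustrations, not recommendations, and
           the address and NQN are placeholders:

               # nvme connect --transport=rdma --traddr=192.168.1.3 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 \
               --nr-io-queues=8 --nr-write-queues=2 --nr-poll-queues=2 \
               --queue-size=128
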
       -k <#>, --keep-alive-tmo=<#>
           Overrides the default keep-alive timeout (in seconds).

       -c <#>, --reconnect-delay=<#>
           Overrides the default delay (in seconds) before a reconnect is
           attempted after a connection loss.

       -l <#>, --ctrl-loss-tmo=<#>
           Overrides the default controller loss timeout period (in seconds).

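           For example, the keep-alive and reconnect behaviour can be tuned
           together; the values below are arbitrary illustrations, and the
           address and NQN are placeholders:

               # nvme connect --transport=rdma --traddr=192.168.1.3 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 \
               --keep-alive-tmo=10 --reconnect-delay=5 --ctrl-loss-tmo=600
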
       -D, --duplicate-connect
           Allows duplicate connections between the same transport host and
           subsystem port.

       -d, --disable-sqflow
           Disables SQ flow control to omit head doorbell updates for
           submission queues when sending NVMe completions.

       -g, --hdr-digest
           Generates/verifies header digest (TCP).

       -G, --data-digest
           Generates/verifies data digest (TCP).

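           As a sketch, header and data digests are requested at connect time;
           this assumes a TCP-capable kernel and target, and the address,
           port, and NQN below are placeholder values:

               # nvme connect --transport=tcp --traddr=192.168.1.3 \
               --trsvcid=4420 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 \
               --hdr-digest --data-digest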

EXAMPLES

       ·   Connect to a subsystem named
           nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 on the IPv4
           address 192.168.1.3. Port 4420 is used by default:

               # nvme connect --transport=rdma --traddr=192.168.1.3 \
               --nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432

SEE ALSO

       nvme-discover(1) nvme-connect-all(1)

AUTHORS

       This was co-written by Jay Freyensee[1] and Christoph Hellwig[2]

NVME

       Part of the nvme-user suite

NOTES

        1. Jay Freyensee
           mailto:james.p.freyensee@intel.com

        2. Christoph Hellwig
           mailto:hch@lst.de

NVMe                              01/07/2020                   NVME-CONNECT(1)